Athletic Training

Athletic training has been recognized by the American Medical Association (AMA) as an allied health care profession since June 1991.

There are five domains of athletic training listed in the 7th edition (2015) of the Athletic Training Practice Analysis:

  • Injury and Illness Prevention and Wellness Promotion
  • Examination, Assessment and Diagnosis
  • Immediate and Emergency Care
  • Therapeutic Intervention
  • Healthcare Administration and Professional Responsibility

An athletic trainer functions as an integral member of the health care team in clinics, secondary schools, colleges and universities, professional sports programs, and other athletic health care settings.

Athletic training in the United States began in October 1881, when Harvard University hired James Robinson to work on conditioning its football team. At the time, the term “athletic trainer” meant one who worked with track and field athletes. Robinson had worked with track and field athletes, and the name “athletic trainer” transferred to those conditioning these football players and, later, other athletes. Athletic trainers began to treat and rehabilitate injuries in order to keep the athletes participating. The first major text on athletic training and the care of athletic injuries, called Athletic Training (later changed to The Trainer’s Bible), was written in 1917 by Samuel E. Bilik. Early athletic trainers had “no technical knowledge, their athletic training techniques usually consisted of a rub, the application of some type of counterirritant, and occasionally the prescription of various home remedies and poultices”. In 1918, Chuck Cramer started the Cramer Chemical Company (now Cramer Products), which produced a line of products used by athletic trainers, and in 1932 began publishing a newsletter entitled The First Aider.

An organization named the National Athletic Trainers’ Association (NATA) was founded in 1938 and folded in 1944. Another NATA was founded in 1950 and still exists. The first athletic training curriculum approved by NATA came in 1959, and the number of athletic training programs began to grow throughout colleges and universities in the United States. In its early development, the athletic training major was geared more toward preparing students to teach at the secondary level, emphasizing health and physical education. The program was first introduced at the undergraduate level in 1969 at Mankato State University, Indiana State University, Lamar University, and the University of New Mexico.

Through the years, athletic trainers have come to be defined as “health care professionals who specialize in preventing, recognizing, managing, and rehabilitating injuries”. During the 1970s, the NATA Professional Education Committee drew up a list of objectives to define athletic training as a major course of study and to eliminate it as a secondary-level teaching credential. By June 1982, there were nine NATA-approved graduate athletic training education programs. On July 1, 1986, this work was used to implement athletic training as a major course of study in at least 10 colleges and universities, and to begin development of the major in a handful of others.

Once athletic training was recognized as an allied health profession, the process of accrediting programs began. NATA’s Professional Education Committee (PEC) was the first to take on the role of approving athletic training educational programs. In 1993, the AMA’s Committee on Allied Health Education and Accreditation (CAHEA) was given responsibility for developing requirements for entry-level athletic training programs, and all programs then had to go through the CAHEA accreditation process. A year later, CAHEA was dissolved and replaced by the Commission on Accreditation of Allied Health Education Programs (CAAHEP), which then led the accreditation process. In 2003, the Joint Review Committee on Athletic Training (JRC-AT) took over the process entirely and became an independent accrediting agency, as the agencies of the other allied health professions had. Three years later, JRC-AT officially became the Commission on Accreditation of Athletic Training Education (CAATE), which is now fully in charge of accrediting athletic training programs in the United States. NATA created the NATABOC in 1969 to implement a certification process for entry-level athletic trainers. In 1989, the NATABOC became an independent non-profit corporation, and it later changed its name to the Board of Certification (BOC).

Diet

In nutrition, diet is the sum of food consumed by a person or other organism. The word diet often implies the use of specific intake of nutrition for health or weight-management reasons (with the two often being related). Although humans are omnivores, each culture and each person holds some food preferences or some food taboos. This may be due to personal tastes or ethical reasons. Individual dietary choices may be more or less healthy.

Dieting is the practice of eating food in a regulated and supervised fashion to decrease, maintain, or increase body weight. In other words, it is conscious control or restriction of the diet. A restricted diet is often used by those who are overweight or obese, sometimes in combination with physical exercise, to reduce body weight. Some people follow a diet to gain weight (usually in the form of muscle). Diets can also be used to maintain a stable body weight and improve health. In particular, diets can be designed to prevent or treat diabetes.

Diets to promote weight loss can be categorized as: low-fat, low-carbohydrate, low-calorie, very low calorie and more recently flexible dieting. A meta-analysis of six randomized controlled trials found no difference between low-calorie, low-carbohydrate, and low-fat diets, with a 2–4 kilogram weight loss over 12–18 months in all studies. At two years, all calorie-reduced diet types cause equal weight loss irrespective of the macronutrients emphasized. In general, the most effective diet is any which reduces calorie consumption.

A study published in American Psychologist found that short-term dieting involving “severe restriction of calorie intake” does not lead to “sustained improvements in weight and health for the majority of individuals”. Other studies have found that the average individual maintains some weight loss after dieting. Weight loss by dieting, while of benefit to those classified as unhealthy, may slightly increase the mortality rate for individuals who are otherwise healthy. Complete nutrition requires ingestion and absorption of vitamins, minerals, and food energy in the form of carbohydrates, proteins, and fats. Dietary habits and choices play a significant role in the quality of life, health and longevity.

A particular diet may be chosen to seek weight loss or weight gain. Changing a subject’s dietary intake, or “going on a diet”, can change the energy balance and increase or decrease the amount of fat stored by the body. Some foods are specifically recommended, or even altered, for conformity to the requirements of a particular diet. These diets are often recommended in conjunction with exercise. Specific weight loss programs can be harmful to health, while others may be beneficial and can thus be coined as healthy diets. The terms “healthy diet” and “diet for weight management” are often related, as the two promote healthy weight management. Having a healthy diet is a way to prevent health problems, and will provide the body with the right balance of vitamins, minerals, and other nutrients.

A healthy diet may improve or maintain optimal health. In developed countries, affluence enables unconstrained caloric intake and possibly inappropriate food choices. Health agencies recommend that people maintain a normal weight by limiting consumption of energy-dense foods and sugary drinks, eating plant-based food, limiting consumption of red and processed meat, and limiting alcohol intake.

Many people choose to forgo food from animal sources to varying degrees for health reasons, issues surrounding morality, or to reduce their personal impact on the environment, although some of the public assumptions about which diets have lower impacts are known to be incorrect. Raw foodism is another contemporary trend. These diets may require adjustment or supplementation, such as vitamins, to meet ordinary nutritional needs.

  • Low-fat

Low-fat diets involve the reduction of the percentage of fat in one’s diet. Calorie consumption is reduced because less fat is consumed. Diets of this type include NCEP Step I and II. A meta-analysis of 16 trials of 2–12 months’ duration found that low-fat diets (without intentional restriction of caloric intake) resulted in average weight loss of 3.2 kg (7.1 lb) over habitual eating.

  • Low-carbohydrate

Low-carbohydrate diets such as Atkins and Protein Power are relatively high in protein and fats. Low-carbohydrate diets are sometimes ketogenic (i.e., they restrict carbohydrate intake sufficiently to cause ketosis).

  • Low-calorie

Low-calorie diets usually produce an energy deficit of 500–1,000 calories per day, which can result in a 0.5 to 1 kilogram (1.1 to 2.2 pounds) weight loss per week. Some of the most commonly used low-calorie diets include DASH diet and Weight Watchers. The National Institutes of Health reviewed 34 randomized controlled trials to determine the effectiveness of low-calorie diets. They found that these diets lowered total body mass by 8% in the short term, over 3–12 months. Women doing low-calorie diets should have at least 1,200 calories per day. Men should have at least 1,800 calories per day.
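The deficit-to-weight-loss figures above can be checked with a back-of-the-envelope calculation. This sketch assumes the common rule of thumb that roughly 7,700 kcal corresponds to 1 kg of body fat; that conversion factor is an assumption, not something stated in the text.

```python
# Back-of-the-envelope check of the deficit-to-loss figures above.
# Assumes the rule of thumb that ~7,700 kcal corresponds to 1 kg of
# body fat (an assumption, not taken from the text).
KCAL_PER_KG_FAT = 7700

def weekly_weight_loss_kg(daily_deficit_kcal: float) -> float:
    """Estimate kilograms lost per week for a steady daily calorie deficit."""
    return daily_deficit_kcal * 7 / KCAL_PER_KG_FAT

# A 500-1,000 kcal/day deficit maps to roughly 0.45-0.91 kg/week,
# close to the 0.5-1 kg range quoted above.
print(round(weekly_weight_loss_kg(500), 2))   # 0.45
print(round(weekly_weight_loss_kg(1000), 2))  # 0.91
```

The estimate is only first-order: real weight loss slows as energy expenditure adapts, so the rule of thumb overstates long-term loss.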

  • Very low-calorie

Very low calorie diets provide 200–800 calories per day, maintaining protein intake but limiting calories from both fat and carbohydrates. They subject the body to starvation and produce an average loss of 1.5–2.5 kg (3.3–5.5 lb) per week. “2-4-6-8”, a popular diet of this variety, follows a four-day cycle in which only 200 calories are consumed the first day, 400 the second day, 600 the third day, and 800 the fourth day, followed by a day of total fasting, after which the cycle repeats. These diets are not recommended for general use, as they are associated with adverse side effects such as loss of lean muscle mass, increased risk of gout, and electrolyte imbalances. People attempting these diets must be monitored closely by a physician to prevent complications.
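The “2-4-6-8” pattern just described can be written out to see how little it actually provides. This sketch treats the fast day as the fifth day of the repeating pattern, which is one reading of the text (it calls the diet a four-day cycle followed by a total fast).

```python
# The "2-4-6-8" pattern described above, treating the fast day as the
# fifth day of the repeating pattern (one reading of the text).
cycle_kcal = [200, 400, 600, 800, 0]  # daily intake; 0 = total fast

total = sum(cycle_kcal)
average = total / len(cycle_kcal)
print(total, average)  # 2000 400.0
```

An average of 400 kcal/day sits at the bottom of the 200–800 kcal/day range for very low calorie diets, which is why the adverse effects listed above are a real concern.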

  • Detox

Detox diets claim to eliminate “toxins” from the human body rather than claiming to cause weight loss. Many of these use herbs or celery and other juicy low-calorie vegetables.

  • Religious

Religious prescription may be a factor in motivating people to adopt a specific restrictive diet. For example, the Biblical Book of Daniel (1:2-20, and 10:2-3) refers to a 10- or 21-day avoidance of foods (Daniel Fast) declared unclean by God in the laws of Moses. In modern versions of the Daniel Fast, food choices may be limited to whole grains, fruits, vegetables, pulses, nuts, seeds and oil. The Daniel Fast resembles the vegan diet in that it excludes foods of animal origin. The passages strongly suggest that the Daniel Fast will promote good health and mental performance.

  • Nutrition

Weight loss diets that manipulate the proportion of macronutrients (low-fat, low-carbohydrate, etc.) have been shown to be no more effective than diets that maintain a typical mix of foods with smaller portions and perhaps some substitutions (e.g. low-fat milk, or less salad dressing). Extreme diets may, in some cases, lead to malnutrition. Nutritionists also agree on the importance of limiting fats, especially saturated fats, to reduce weight and improve health. They also agree on the importance of reducing salt intake, because foods such as snacks, biscuits, and bread already contain salt, contributing to excess daily salt intake.

  • How the body eliminates fat

When the body is expending more energy than it is consuming (e.g. when exercising), the body’s cells rely on internally stored energy sources, such as complex carbohydrates and fats, for energy. The first source to which the body turns is glycogen (by glycogenolysis). Glycogen is a complex carbohydrate, 65% of which is stored in skeletal muscles and the remainder in the liver (totaling about 2,000 kcal in the whole body).
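The glycogen figures above imply a simple energy partition, sketched here using only the numbers given in the text (~2,000 kcal total, 65% in skeletal muscle, the remainder in the liver).

```python
# Energy partition of body glycogen, using the figures quoted above:
# ~2,000 kcal of glycogen in the whole body, 65% stored in skeletal
# muscle and the remainder in the liver.
TOTAL_GLYCOGEN_KCAL = 2000
MUSCLE_FRACTION = 0.65

muscle_kcal = TOTAL_GLYCOGEN_KCAL * MUSCLE_FRACTION
liver_kcal = TOTAL_GLYCOGEN_KCAL - muscle_kcal
print(muscle_kcal, liver_kcal)  # 1300.0 700.0
```

So roughly 1,300 kcal of glycogen sits in muscle and 700 kcal in the liver, a reserve that a day or two of hard exercise can largely deplete before fat becomes the dominant fuel.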

  • Weight loss groups

Some weight loss groups aim to make money, others work as charities. The former include Weight Watchers and Peertrainer. The latter include Overeaters Anonymous and groups run by local organizations. These organizations’ customs and practices differ widely. Some groups are modelled on twelve-step programs, while others are quite informal. Some groups advocate certain prepared foods or special menus, while others train dieters to make healthy choices from restaurant menus and while grocery-shopping and cooking.

  • Food diary

A 2008 study published in the American Journal of Preventive Medicine showed that dieters who kept a daily food diary (or diet journal) lost twice as much weight as those who did not keep a food log, suggesting that recording one’s eating prompts consuming fewer calories.

Nutrition and Pregnancy

Nutrition and pregnancy refers to the nutrient intake and dietary planning undertaken before, during, and after pregnancy. Nutrition of the fetus begins at conception. For this reason, the nutrition of the mother is important from before conception (probably several months before) as well as throughout pregnancy and breastfeeding. An ever-increasing number of studies have shown that the nutrition of the mother will have an effect on the child, up to and including the risk for cancer, cardiovascular disease, hypertension and diabetes throughout life.

An inadequate or excessive amount of some nutrients may cause malformations or medical problems in the fetus, and neurological disorders and handicaps are a risk run by mothers who are malnourished. An estimated 23.8% of babies worldwide are born with lower than optimal birth weights due to lack of proper nutrition. Personal habits such as smoking, drinking alcohol, consuming caffeine, and using certain medications or street drugs can negatively and irreversibly affect the development of the baby, which occurs in the early stages of pregnancy.

Caffeine is sometimes assumed to harm the unborn baby, but there is not enough evidence to say whether this is true. A recent review showed that more research is needed to determine whether caffeine intake affects birth weight, preterm births, gestational diabetes, and other outcomes.

Beneficial pre-pregnancy nutrients

As with most diets, there is a risk of over-supplementing. As general advice, both state and medical recommendations are that mothers follow the instructions on particular vitamin packaging for the correct or recommended daily allowance (RDA). Daily prenatal use of iron substantially improves birth weight, potentially reducing the risk of low birth weight.

  • Folic acid supplementation is recommended prior to conception, to prevent development of spina bifida and other neural tube defects. It should be taken as at least 0.4 mg/day throughout the first trimester of pregnancy, 0.6 mg/day through the pregnancy, and 0.5 mg/day while breastfeeding in addition to eating foods rich in folic acid such as green leafy vegetables.
  • Iodine levels are frequently too low in pregnant women, and iodine is necessary for normal thyroid function and the mental development of the fetus; severe deficiency can cause cretinism. Pregnant women should take prenatal vitamins containing iodine.
  • Vitamin D levels vary with exposure to sunlight. While it was once assumed that supplementation was necessary only at high latitudes, recent studies of vitamin D levels throughout the United States and many other countries have shown a large number of women with low levels. For this reason, there is a growing movement to recommend supplementation with 1,000 IU of vitamin D daily throughout pregnancy; vitamin D is necessary to prevent rickets, a disease causing weak bones.
  • A large number of pregnant women have been found to have low levels of vitamin B12, but supplementation has not yet been shown to improve pregnancy outcomes or the health of the newborn, although a benefit is suspected.
  • Polyunsaturated fatty acids, specifically docosahexaenoic acid (DHA) and eicosapentaenoic acid (EPA) are very beneficial for fetal development. Several studies have shown a small drop in preterm delivery and in low birth weight in mothers with higher intakes. The best dietary source of omega-3 fatty acids is oily fish. Some other omega-3 fatty acids not found in fish can be found in foods such as flaxseeds, walnuts, pumpkin seeds, and enriched eggs.
  • Iron is needed for the healthy growth of the fetus and placenta, especially during the second and third trimesters. It is also essential before pregnancy for the production of hemoglobin. There is no evidence that a hemoglobin level of 7 grams/100 ml or higher is detrimental to pregnancy, but it must be acknowledged that maternal hemorrhage is a major source of maternal mortality worldwide, and a reserve capacity to carry oxygen is desirable. According to the conclusions of a Cochrane review, iron supplementation reduces the risk of maternal anaemia and iron deficiency in pregnancy, but the positive effect on other maternal and infant outcomes is less clear.
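The folic acid schedule in the first bullet above can be written down as a simple lookup. This is a sketch: the stage labels are my own, while the mg/day values are the ones quoted in the text.

```python
# Folic acid recommendations quoted above, keyed by stage of pregnancy.
# Stage labels are illustrative; doses (mg/day) are from the text.
folic_acid_mg_per_day = {
    "first trimester": 0.4,        # at least 0.4 mg/day
    "remainder of pregnancy": 0.6,
    "breastfeeding": 0.5,
}

for stage, dose in folic_acid_mg_per_day.items():
    print(f"{stage}: {dose} mg/day")
```

Note the recommendation is for supplementation in addition to folate-rich foods such as green leafy vegetables, not instead of them.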

Herbalism

Herbalism (also herbal medicine or phytotherapy) is the study of botany and use of plants intended for medicinal purposes or for supplementing a diet. Plants have been the basis for medical treatments through much of human history, and such traditional medicine is still widely practiced today. Modern medicine recognizes herbalism as a form of alternative medicine, as the practice of herbalism is not strictly based on evidence gathered using the scientific method. Modern medicine makes use of many plant-derived compounds as the basis for evidence-based pharmaceutical drugs. Although phytotherapy may apply modern standards of effectiveness testing to herbs and medicines derived from natural sources, few high-quality clinical trials and standards for purity or dosage exist. The scope of herbal medicine is sometimes extended to include fungal and bee products, as well as minerals, shells and certain animal parts.

Archaeological evidence indicates that the use of medicinal plants dates back to the Paleolithic age, approximately 60,000 years ago. Written evidence of herbal remedies dates back over 5,000 years, to the Sumerians, who compiled lists of plants. A number of ancient cultures wrote about plants and their medical uses in books called herbals. In ancient Egypt, herbs are mentioned in Egyptian medical papyri, depicted in tomb illustrations, or on rare occasions found in medical jars containing trace amounts of herbs. Among the oldest, lengthiest, and most important medical papyri of ancient Egypt, the Ebers Papyrus dates from about 1550 BC and covers more than 700 drugs, mainly of plant origin. The earliest known Greek herbals come from Theophrastus of Eresos, who in the 4th century BC wrote Historia Plantarum in Greek; from Diocles of Carystus, who wrote during the 3rd century BC; and from Krateuas, who wrote in the 1st century BC. Only a few fragments of these works have survived intact, but from what remains, scholars have noted a large amount of overlap with the Egyptian herbals. Seeds likely used for herbalism have been found in archaeological sites of Bronze Age China dating from the Shang Dynasty (c. 1600 BC–c. 1046 BC). Over a hundred of the 224 drugs mentioned in the Huangdi Neijing, an early Chinese medical text, are herbs. Herbs also commonly featured in the medicine of ancient India, where the principal treatment for diseases was diet. De Materia Medica, originally written in Greek by Pedanius Dioscorides (c. 40–90 AD) of Anazarbus, Cilicia, a Greek physician, pharmacologist and botanist, is a particularly important example of herbal writing; it dominated the field for some 1,500 years, until the 1600s.

Herbalists tend to use extracts from parts of plants, such as the roots or leaves, rather than isolating particular phytochemicals. Pharmaceutical medicine prefers single ingredients on the grounds that dosage can be more easily quantified. It is also possible to patent single compounds, and therefore to generate income. Herbalists often reject the notion of a single active ingredient, arguing that the different phytochemicals present in many herbs will interact to enhance the therapeutic effects of the herb and dilute toxicity. Furthermore, they argue that a single ingredient may contribute to multiple effects. Herbalists deny that herbal synergism can be duplicated with synthetic chemicals. They argue that phytochemical interactions and trace components may alter the drug response in ways that cannot currently be replicated with a combination of a few potentially active ingredients. Pharmaceutical researchers recognize the concept of drug synergism but note that clinical trials may be used to investigate the efficacy of a particular herbal preparation, provided the formulation of that herb is consistent.

In specific cases, the claims of synergy and multifunctionality have been supported by science. The open question is how widely both can be generalized. Herbalists would argue that cases of synergy can be widely generalized, on the basis of their interpretation of evolutionary history, an interpretation not necessarily shared by the pharmaceutical community. Plants are subject to similar selection pressures as humans, and therefore must develop resistance to threats such as radiation, reactive oxygen species and microbial attack in order to survive. Optimal chemical defenses have been selected for and have thus developed over millions of years. Human diseases are multifactorial, and herbalists believe they may be treated by consuming the chemical defenses present in herbs. Bacteria, inflammation, nutrition and reactive oxygen species may all play a role in arterial disease. Herbalists claim a single herb may simultaneously address several of these factors. In short, herbalists view their field as the study of a web of relationships rather than a quest for a single cause and a single cure for a single condition.

In selecting herbal treatments herbalists may use forms of information that are not applicable to pharmacists. Because herbs can moonlight as vegetables, teas or spices they have a huge consumer base and large-scale epidemiological studies become feasible. Ethnobotanical studies are another source of information. Herbalists contend that historical medical records and herbals are underutilized resources. They favor the use of convergent information in assessing the medical value of plants. An example would be when in-vitro activity is consistent with traditional use.

Antimicrobial resistance

Antimicrobial resistance (AMR) is the ability of a microbe to resist the effects of medications previously used to treat it. This broader term also covers antibiotic resistance, which applies to bacteria and antibiotics. Resistance arises in one of three ways: natural resistance in certain types of bacteria, genetic mutation, or one species acquiring resistance from another. Resistance can appear spontaneously because of random mutations, or more commonly through gradual buildup over time, and because of misuse of antibiotics or antimicrobials. Resistant microbes are increasingly difficult to treat, requiring alternative medications or higher doses, both of which may be more expensive or more toxic. Microbes resistant to multiple antimicrobials are called multidrug resistant (MDR), or sometimes superbugs. Antimicrobial resistance is on the rise, causing millions of deaths every year.

Antibiotics should only be used when needed as prescribed by health professionals. The prescriber should closely adhere to the five rights of drug administration: the right patient, the right drug, the right dose, the right route, and the right time. Narrow-spectrum antibiotics are preferred over broad-spectrum antibiotics when possible, as effectively and accurately targeting specific organisms is less likely to cause resistance. Cultures should be taken before treatment when indicated and treatment potentially changed based on the susceptibility report. For people who take these medications at home, education about proper use is essential. Health care providers can minimize spread of resistant infections by use of proper sanitation, including handwashing and disinfecting between patients, and should encourage the same of the patient, visitors, and family members.

Rising drug resistance is caused mainly by improper use of antimicrobials in humans as well as in animals, and spread of resistant strains between the two. Antibiotics increase selective pressure in bacterial populations, causing vulnerable bacteria to die; this increases the percentage of resistant bacteria which continue growing. With resistance to antibiotics becoming more common there is greater need for alternative treatments. Calls for new antibiotic therapies have been issued, but new drug development is becoming rarer.

The WHO defines antimicrobial resistance as a microorganism’s resistance to an antimicrobial drug that was once able to treat an infection by that microorganism. A person cannot become resistant to antibiotics. Resistance is a property of the microbe, not a person or other organism infected by a microbe.

Bacteria with resistance to antibiotics predate medical use of antibiotics by humans; however, widespread antibiotic use has made more bacteria resistant through the process of evolutionary pressure.

Reasons for the widespread use of antibiotics include:

  • increasing global availability over time since the 1950s
  • uncontrolled sale in many low or middle income countries, where they can be obtained over the counter without a prescription, potentially resulting in antibiotics being used when not indicated. This may result in emergence of resistance in any remaining bacteria.

Antibiotic use in livestock feed at low doses for growth promotion is an accepted practice in many industrialized countries and is known to lead to increased levels of resistance. Releasing large quantities of antibiotics into the environment during pharmaceutical manufacturing through inadequate wastewater treatment increases the risk that antibiotic-resistant strains will develop and spread. It is uncertain whether antibacterials in soaps and other products contribute to antibiotic resistance, but they are discouraged for other reasons.

Food pyramid

A food pyramid or diet pyramid is a triangular diagram representing the optimal number of servings to be eaten each day from each of the basic food groups. The first pyramid was published in Sweden in 1974. The 1992 pyramid introduced by the United States Department of Agriculture (USDA) was called the “Food Guide Pyramid”. It was updated in 2005, and then it was replaced by MyPlate in 2011.

The World Health Organization, in conjunction with the Food and Agriculture Organization, has published guidelines, based on meta-analyses, aimed at preventing obesity, chronic diseases and dental caries; these can effectively be represented in a food pyramid, though the organizations present them as a table rather than a “pyramid”. The structure is similar in some respects to the USDA food pyramid, but there are clear distinctions between types of fats, and a more dramatic distinction where carbohydrates are split on the basis of free sugars versus sugars in their natural form. Some food substances are singled out due to their impact on the target issues the “pyramid” is meant to address, while in a later revision some recommendations are omitted, since they follow automatically from other recommendations, and other sub-categories are added. The reports quoted here explain that where a nutrient has no stated lower limit in their table, there is no requirement for that nutrient in the diet.

A modified food pyramid was proposed in 1999 for adults aged over 70.

Vegetables

A vegetable is a part of a plant consumed by humans that is generally savory but not sweet. A vegetable is not considered a grain, fruit, nut, spice, or herb. For example, the stem, root, or flower of a plant may be eaten as a vegetable. Vegetables contain many vitamins and minerals, but different vegetables contain different amounts and kinds, so it is important to eat a wide variety of types. For example, green vegetables typically contain vitamin A, dark orange and dark green vegetables contain vitamin C, and vegetables like broccoli and related plants contain iron and calcium. Vegetables are very low in fats and calories, but ingredients added in preparation can often add these.

Grains

These foods provide complex carbohydrates, which are an important source of energy, especially for a low-fat meal plan. Examples include corn, wheat, and rice.

Fruits

In terms of food (rather than botany), fruits are the sweet-tasting seed-bearing parts of plants, or occasionally sweet parts of plants which do not bear seeds. These include apples, oranges, grapes, bananas, etc. Fruits are low in calories and fat and are a source of natural sugars, fiber and vitamins. Processing fruit, as when canning or making juices, may add sugars and remove nutrients. The fruit food group is sometimes combined with the vegetable food group. Note that many plant species produce seed pods which are considered fruits in botany, and a number of botanical fruits are conventionally not considered fruits in cuisine because they lack the characteristic sweet taste, e.g., tomatoes or avocados.

Oils and sweets

A food pyramid’s tip is the smallest part, so the fats and sweets in the top of the Food Pyramid should comprise the smallest percentage of the diet. The foods at the top of the food pyramid should be eaten sparingly because they provide calories, but not much in the way of nutrition. These foods include salad dressings, oils, cream, butter, margarine, sugars, soft drinks, candies, and sweet desserts.

Dairy

Dairy products are produced from the milk of mammals, usually but not exclusively cattle. They include milk, yogurt and cheese. Milk and its derivative products are a rich source of dietary calcium and also provide protein, phosphorus, vitamin A, and vitamin D. However, many dairy products are high in saturated fat and cholesterol compared to vegetables, fruits and whole grains, which is why skimmed products are available as an alternative. Historically, adults were recommended to consume three cups of dairy products per day. More recently, evidence has been mounting that dairy products have greater negative effects on health than previously thought and confer fewer benefits. For example, recent research has shown that dairy products are not associated with stronger bones or fewer fractures.

Meat and beans

Meat is the tissue – usually muscle – of an animal consumed by humans. Since most parts of many animals are edible, there is a vast variety of meats. Meat is a major source of protein, as well as iron, zinc, and vitamin B12. Meats, poultry, and fish include beef, chicken, pork, salmon, tuna, shrimp, and eggs.

The meat group is one of the major compacted food groups in the food guide pyramid. Many of the same nutrients found in meat can also be found in foods like eggs, dry beans, and nuts; such foods are typically placed in the same category as meats, as meat alternatives. These include tofu, products that resemble meat or fish but are made with soy, eggs, and cheeses. For those who do not consume meat or animal products (see Vegetarianism, veganism and Taboo food and drink), meat analogs, tofu, beans, lentils, chickpeas, nuts and other high-protein vegetables are also included in this group. The food guide pyramid suggests that adults eat 2–3 servings per day. One serving of meat is 4 oz (110 g), about the size of a deck of cards.

Vitamin

A vitamin is an organic compound and a vital nutrient that an organism requires in limited amounts. An organic chemical compound (or related set of compounds) is called a vitamin when the organism cannot synthesize the compound in sufficient quantities, and it must be obtained through the diet; thus, the term vitamin is conditional upon the circumstances and the particular organism. For example, ascorbic acid (one form of vitamin C) is a vitamin for humans, but not for most other animal organisms. Supplementation is important for the treatment of certain health problems, but there is little evidence of nutritional benefit when used by otherwise healthy people.

By convention the term vitamin does not include other essential nutrients, such as dietary minerals, essential fatty acids, essential amino acids (which are needed in greater amounts than vitamins) or the many other nutrients that promote health, and are required less often to maintain the health of the organism. Thirteen vitamins are universally recognized at present. Vitamins are classified by their biological and chemical activity, not their structure. Thus, each vitamin refers to a number of vitamer compounds that all show the biological activity associated with a particular vitamin. Such a set of chemicals is grouped under an alphabetized vitamin “generic descriptor” title, such as “vitamin A”, which includes the compounds retinal, retinol, and four known carotenoids. Vitamers by definition are convertible to the active form of the vitamin in the body, and are sometimes inter-convertible to one another, as well.

Vitamins have diverse biochemical functions. Some, such as vitamin D, have hormone-like functions as regulators of mineral metabolism, or regulators of cell and tissue growth and differentiation (such as some forms of vitamin A). Others function as antioxidants (e.g., vitamin E and sometimes vitamin C). The largest number of vitamins, the B complex vitamins, function as enzyme cofactors (coenzymes) or the precursors for them; coenzymes help enzymes in their work as catalysts in metabolism. In this role, vitamins may be tightly bound to enzymes as part of prosthetic groups: For example, biotin is part of enzymes involved in making fatty acids. They may also be less tightly bound to enzyme catalysts as coenzymes, detachable molecules that function to carry chemical groups or electrons between molecules. For example, folic acid may carry methyl, formyl, and methylene groups in the cell. Although these roles in assisting enzyme-substrate reactions are vitamins’ best-known function, the other vitamin functions are equally important.

In those who are otherwise healthy, there is little evidence that supplements have any benefit with respect to cancer or heart disease. Vitamin A and E supplements not only provide no health benefits for generally healthy individuals but may increase mortality, though the two large studies that support this conclusion included smokers, for whom it was already known that beta-carotene supplements can be harmful. Other findings suggest that vitamin E toxicity is limited to a specific form when taken in excess.

The European Union and other countries of Europe have regulations that define limits of vitamin (and mineral) dosages for their safe use as food supplements. Most vitamins that are sold as food supplements cannot exceed a maximum daily dosage. Vitamin products above these legal limits are not considered food supplements and must be registered as prescription or non-prescription (over-the-counter drugs) due to their potential side effects. As a result, most of the fat-soluble vitamins (such as the vitamins A, D, E, and K) that contain amounts above the daily allowance are drug products. The daily dosage of a vitamin supplement for example cannot exceed 300% of the recommended daily allowance, and for vitamin A, this limit is even lower (200%). Such regulations are applicable in most European countries.
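The dosage-limit rule described above can be sketched as a short Python function. This is an illustrative simplification, not a reproduction of any actual regulation: the vitamin names and RDA values in the usage example are hypothetical placeholders, and real rules vary by country and nutrient.

```python
# Illustrative sketch of an EU-style dosage-limit rule for vitamin supplements.
# General limit: a daily dose may not exceed 300% of the recommended daily
# allowance (RDA); for vitamin A the limit is stricter (200%).
GENERAL_LIMIT = 3.0      # 300% of RDA
VITAMIN_A_LIMIT = 2.0    # 200% of RDA

def classify_supplement(vitamin: str, daily_dose: float, rda: float) -> str:
    """Classify a product as a food supplement or a drug product based on
    whether its daily dose stays within the legal multiple of the RDA."""
    limit = VITAMIN_A_LIMIT if vitamin.lower() == "vitamin a" else GENERAL_LIMIT
    return "food supplement" if daily_dose <= limit * rda else "drug product"

# Hypothetical examples (doses and RDAs in mg/µg are placeholders):
print(classify_supplement("vitamin C", daily_dose=200, rda=80))    # 200 <= 240 -> food supplement
print(classify_supplement("vitamin A", daily_dose=2000, rda=800))  # 2000 > 1600 -> drug product
```

Products above the limit are not banned; they simply fall under medicines regulation and must be registered as prescription or over-the-counter drugs.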

500 mg calcium supplement tablets, with vitamin D, made from calcium carbonate, maltodextrin, mineral oil, hypromellose, glycerin, cholecalciferol, polyethylene glycol, and carnauba wax.

Dietary supplements often contain vitamins, but may also include other ingredients, such as minerals, herbs, and botanicals. Scientific evidence supports the benefits of dietary supplements for persons with certain health conditions. In some cases, vitamin supplements may have unwanted effects, especially if taken before surgery, with other dietary supplements or medicines, or if the person taking them has certain health conditions. They may also contain levels of vitamins many times higher, and in different forms, than one may ingest through food.

Rubella

Rubella, also known as German measles or three-day measles, is an infection caused by the rubella virus. This disease is often mild, with half of those infected not realizing that they are sick. A rash may start around two weeks after exposure and last for three days. It usually starts on the face and spreads to the rest of the body. The rash is not as bright as that of measles and is sometimes itchy. Swollen lymph nodes are common and may last a few weeks. A fever, sore throat, and fatigue may also occur. In adults, joint pain is common. Complications may include bleeding problems, testicular swelling, and inflammation of nerves. Infection during early pregnancy may result in a child born with congenital rubella syndrome (CRS) or in miscarriage. Symptoms of CRS include eye problems such as cataracts, ear problems such as deafness, and heart and brain abnormalities. Problems are rare after the 20th week of pregnancy.

Rubella is usually spread through the air via coughs of people who are infected. People are infectious during the week before and after the appearance of the rash. Babies with CRS may spread the virus for more than a year. Only humans are infected. Insects do not spread the disease. Once recovered, people are immune to future infections. Testing is available that can verify immunity. Diagnosis is confirmed by finding the virus in the blood, throat, or urine. Testing the blood for antibodies may also be useful.

Rubella is preventable with the rubella vaccine, with a single dose being more than 95% effective. Often it is given in combination with the measles vaccine and mumps vaccine, known as the MMR vaccine. If population vaccination rates fall below about 80%, however, more women may reach childbearing age without developing immunity, and cases of congenital rubella syndrome could increase. Once infected, there is no specific treatment.

Rubella is a common infection in many areas of the world. Each year about 100,000 cases of congenital rubella syndrome occur. Rates of disease have decreased in many areas as a result of vaccination. There are ongoing efforts to eliminate the disease globally. In April 2015 the World Health Organization declared the Americas free of rubella transmission. The name “rubella” is from Latin and means “little red.” It was first described as a separate disease by German physicians in 1814, resulting in the name “German measles.”

The disease is caused by rubella virus, a togavirus that is enveloped and has a single-stranded RNA genome. The virus is transmitted by the respiratory route and replicates in the nasopharynx and lymph nodes. The virus is found in the blood 5 to 7 days after infection and spreads throughout the body. The virus has teratogenic properties and is capable of crossing the placenta and infecting the fetus where it stops cells from developing or destroys them. During this incubation period, the patient is contagious typically for about one week before he/she develops a rash and for about one week thereafter.

Increased susceptibility to infection might be inherited as there is some indication that HLA-A1 or factors surrounding A1 on extended haplotypes are involved in virus infection or non-resolution of the disease.

Rubella virus specific IgM antibodies are present in people recently infected by rubella virus, but these antibodies can persist for over a year, and a positive test result needs to be interpreted with caution. The presence of these antibodies along with, or a short time after, the characteristic rash confirms the diagnosis.

Rubella infections are prevented by active immunisation programs using live, attenuated virus vaccines. Two live attenuated virus vaccines, the RA 27/3 and Cendehill strains, were effective in the prevention of adult disease. However, their use in prepubertal females did not produce a significant fall in the overall incidence rate of CRS in the UK. Reductions were only achieved by immunisation of all children.

The vaccine is now usually given as part of the MMR vaccine. The WHO recommends the first dose be given at 12 to 18 months of age with a second dose at 36 months. Pregnant women are usually tested for immunity to rubella early on. Women found to be susceptible are not vaccinated until after the baby is born because the vaccine contains live virus.[24]

The immunisation program has been quite successful. Cuba declared the disease eliminated in the 1990s, and in 2004 the Centers for Disease Control and Prevention announced that both the congenital and acquired forms of rubella had been eliminated from the United States.[25][26]

Screening for rubella susceptibility by history of vaccination or by serology is recommended in the United States for all women of childbearing age at their first preconception counseling visit to reduce incidence of congenital rubella syndrome (CRS). It is recommended that all susceptible non-pregnant women of childbearing age should be offered rubella vaccination. Due to concerns about possible teratogenicity, use of MMR vaccine is not recommended during pregnancy. Instead, susceptible pregnant women should be vaccinated as soon as possible in the postpartum period.

There is no specific treatment for rubella; however, management is a matter of responding to symptoms to diminish discomfort. Treatment of newborn babies is focused on management of the complications. Congenital heart defects and cataracts can be corrected by direct surgery.

Management for ocular congenital rubella syndrome (CRS) is similar to that for age-related macular degeneration, including counseling, regular monitoring, and the provision of low vision devices, if required.

Rubella infection of children and adults is usually mild, self-limiting and often asymptomatic. The prognosis in children born with CRS is poor. Rubella is a disease that occurs worldwide. The virus tends to peak during the spring in countries with temperate climates. Before the rubella vaccine was introduced in 1969, widespread outbreaks usually occurred every 6–9 years in the United States and 3–5 years in Europe, mostly affecting children in the 5–9 year age group. Since the introduction of the vaccine, occurrences have become rare in those countries with high uptake rates.

Vaccination has interrupted the transmission of rubella in the Americas: no endemic case has been observed since February 2009. Since the virus can always be reintroduced from other continents, the population still needs to remain vaccinated to keep the western hemisphere free of rubella. During the epidemic in the U.S. between 1962 and 1965, rubella virus infections during pregnancy were estimated to have caused 30,000 stillbirths and 20,000 children to be born impaired or disabled as a result of CRS. Universal immunisation producing a high level of herd immunity is important in the control of epidemics of rubella.

Stroke

Stroke is a medical condition in which poor blood flow to the brain results in cell death. There are two main types of stroke: ischemic, due to lack of blood flow, and hemorrhagic, due to bleeding. They result in part of the brain not functioning properly. Signs and symptoms of a stroke may include an inability to move or feel on one side of the body, problems understanding or speaking, feeling like the world is spinning, or loss of vision to one side. Signs and symptoms often appear soon after the stroke has occurred. If symptoms last less than one or two hours it is known as a transient ischemic attack (TIA) or mini-stroke. A hemorrhagic stroke may also be associated with a severe headache. The symptoms of a stroke can be permanent. Long-term complications may include pneumonia or loss of bladder control.

The main risk factor for stroke is high blood pressure. Other risk factors include tobacco smoking, obesity, high blood cholesterol, diabetes mellitus, previous TIA, and atrial fibrillation. An ischemic stroke is typically caused by blockage of a blood vessel, though there are also less common causes. A hemorrhagic stroke is caused by either bleeding directly into the brain or into the space between the brain’s membranes. Bleeding may occur due to a ruptured brain aneurysm. Diagnosis is typically with medical imaging such as a CT scan or magnetic resonance imaging (MRI) scan along with a physical exam. Other tests such as an electrocardiogram (ECG) and blood tests are done to determine risk factors and rule out other possible causes. Low blood sugar may cause similar symptoms.

Prevention includes decreasing risk factors, as well as possibly aspirin, statins, surgery to open up the arteries to the brain in those with problematic narrowing, and warfarin in those with atrial fibrillation. A stroke or TIA often requires emergency care. An ischemic stroke, if detected within three to four and a half hours, may be treatable with a medication that can break down the clot. Aspirin should be used. Some hemorrhagic strokes benefit from surgery. Treatment to try to recover lost function is called stroke rehabilitation and ideally takes place in a stroke unit; however, these are not available in much of the world.

Strokes can be classified into two major categories: ischemic and hemorrhagic. Ischemic strokes are caused by interruption of the blood supply to the brain, while hemorrhagic strokes result from the rupture of a blood vessel or an abnormal vascular structure. About 87% of strokes are ischemic, the rest being hemorrhagic. Bleeding can develop inside areas of ischemia, a condition known as “hemorrhagic transformation.” It is unknown how many hemorrhagic strokes actually start as ischemic strokes.

Subtypes

If the area of the brain affected contains one of the three prominent central nervous system pathways (the spinothalamic tract, the corticospinal tract, and the dorsal column/medial lemniscus), symptoms may include:

  • hemiplegia and muscle weakness of the face
  • numbness
  • reduction in sensory or vibratory sensation
  • initial flaccidity (reduced muscle tone), replaced by spasticity (increased muscle tone), excessive reflexes, and obligatory synergies.

In most cases, the symptoms affect only one side of the body (unilateral). The deficit usually appears on the side of the body opposite the affected side of the brain. However, since these pathways also travel in the spinal cord, and any lesion there can produce these symptoms as well, the presence of any one of these symptoms does not necessarily indicate a stroke. In addition to the above CNS pathways, the brainstem gives rise to most of the twelve cranial nerves. A stroke affecting the brainstem can therefore produce symptoms relating to deficits in these cranial nerves:

  • altered smell, taste, hearing, or vision (total or partial)
  • drooping of eyelid (ptosis) and weakness of ocular muscles
  • decreased reflexes: gag, swallow, pupil reactivity to light
  • decreased sensation and muscle weakness of the face
  • balance problems and nystagmus
  • altered breathing and heart rate
  • weakness in sternocleidomastoid muscle with inability to turn head to one side
  • weakness in tongue (inability to stick out the tongue or move it from side to side)

Aerobic exercise

Aerobic exercise is physical exercise of low to high intensity that depends primarily on the aerobic energy-generating process. Aerobic literally means “relating to, involving, or requiring free oxygen”, and refers to the use of oxygen to adequately meet energy demands during exercise via aerobic metabolism. Generally, light-to-moderate intensity activities that are sufficiently supported by aerobic metabolism can be performed for extended periods of time.

Examples of cardiovascular/aerobic exercise practiced in this way include medium- to long-distance running or jogging, swimming, cycling, and walking, according to the first extensive research on aerobic exercise, conducted in the 1960s on over 5,000 U.S. Air Force personnel by Dr. Kenneth H. Cooper.

Kenneth Cooper was the first person to introduce the concept of aerobic exercise. In the 1960s, Cooper started research into preventive medicine. He became intrigued by the belief that exercise can preserve one’s health. In 1970 he created his own institute (the Cooper Institute) for non-profit research and education devoted to preventive medicine. He sparked millions into becoming active and is now known as the “father of aerobics”.

Benefits:

Among the recognized benefits of doing regular aerobic exercise are:

  • Strengthening the muscles involved in respiration, to facilitate the flow of air in and out of the lungs
  • Strengthening and enlarging the heart muscle, to improve its pumping efficiency and reduce the resting heart rate, known as aerobic conditioning
  • Improving circulation efficiency and reducing blood pressure
  • Increasing the total number of red blood cells in the body, facilitating transport of oxygen
  • Improved mental health, including reducing stress and lowering the incidence of depression, as well as increased cognitive capacity.
  • Reducing the risk for diabetes. One meta-analysis of multiple studies has shown that aerobic exercise does help lower HbA1c levels in type 2 diabetics.

As a result, aerobic exercise can reduce the risk of death due to cardiovascular problems. In addition, high-impact aerobic activities (such as jogging or using a skipping rope) can stimulate bone growth, as well as reduce the risk of osteoporosis for both men and women.

In addition to the health benefits of aerobic exercise, there are numerous performance benefits:

  • Increased storage of energy molecules such as fats and carbohydrates within the muscles, allowing for increased endurance
  • Neovascularization of the muscle sarcomeres to increase blood flow through the muscles
  • Increasing speed at which aerobic metabolism is activated within muscles, allowing a greater portion of energy for intense exercise to be generated aerobically
  • Improving the ability of muscles to use fats during exercise, preserving intramuscular glycogen
  • Enhancing the speed at which muscles recover from high intensity exercise
  • Neurobiological effects: improvements in brain structural connections and increased gray matter density, new neuron growth, improved cognitive function (cognitive control and various forms of memory), and improvement or maintenance of mental health

Some drawbacks of aerobic exercise include:

  • Overuse injuries because of repetitive, high-impact exercise such as distance running.
  • Not an effective approach to building muscle.
  • Effective for fat loss only when used consistently.

Both the health benefits and the performance benefits, or “training effect”, require a minimum duration and frequency of exercise. Most authorities suggest at least twenty minutes performed at least three times per week.

Cooper himself defines aerobic exercise as the ability to utilise the maximum amount of oxygen during exhaustive work. Cooper describes some of the major health benefits of aerobic exercise, such as gaining more efficient lungs by maximising breathing capacity, thereby increasing the ability to ventilate more air in a shorter period of time. As breathing capacity increases, one is able to extract oxygen more quickly into the blood stream, increasing elimination of carbon dioxide. With aerobic exercise the heart becomes more efficient at functioning, and blood volume, hemoglobin and red blood cells increase, enhancing the ability of the body to transport oxygen from the lungs into the blood and muscles. Metabolism will change and enable consumption of more calories without putting on weight. Aerobic exercise can delay osteoporosis, as there is an increase in muscle mass, a loss of fat and an increase in bone density. With these variables improving, there is a decrease in the likelihood of diabetes, as muscles use sugars better than fat. One of the major benefits of aerobic exercise is that body weight may decrease slowly; it will only decrease at a rapid pace if there is also a calorie restriction. In this way, aerobic exercise can help reduce obesity rates.