Electric Field Array Micro-System Lab-On-Chip and Biomedical Analysis

The micro-system is fully automated and is used in biomedical applications. It consists of two core parts:

  1. A CMOS integrated circuit, which contains the sensing and actuation sections.
  2. A control circuit board, which contains the amplification and conditioning sections. This system is a suitable candidate for biomedical purposes such as noninvasive cell exposure, cancer detection, and the selection of antibodies (Gallab 1027).

DeFET’s principle of working

The DeFET contains two FETs, one P-type and one N-type. The two drains of the FETs are cross-coupled, and the gates are floating and connected to each other. When a field E is applied across the DeFET sensor, E = (Vin1 - Vin2)/d. The differential voltage Vdiff is the product of the applied field E and the distance d between the split gates: Vdiff = Vin1 - Vin2 = E·d. Also, IL = Ipmos - Inmos, where IL is the load current and Ipmos and Inmos are the drain currents of the respective FETs (Gallab 1033, 1035).
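As a sketch, these relations can be checked numerically. The gate voltages, gate spacing, and drain currents below are illustrative assumptions, not values from Gallab's paper.

```python
# Worked example of the DeFET relations: E = (Vin1 - Vin2)/d,
# Vdiff = E*d, and IL = Ipmos - Inmos. All numbers are invented
# for illustration.

def applied_field(v_in1, v_in2, d):
    """E = (Vin1 - Vin2) / d, the field across the split gates."""
    return (v_in1 - v_in2) / d

def differential_voltage(e_field, d):
    """Vdiff = E * d, recovered from the sensed field."""
    return e_field * d

def load_current(i_pmos, i_nmos):
    """IL = Ipmos - Inmos, the difference of the two drain currents."""
    return i_pmos - i_nmos

# Example: 1.2 V and 0.8 V on gates 2 um apart.
E = applied_field(1.2, 0.8, 2e-6)       # 2.0e5 V/m
Vdiff = differential_voltage(E, 2e-6)   # recovers 0.4 V
IL = load_current(30e-6, 22e-6)         # 8 uA
print(E, Vdiff, IL)
```

Note how Vdiff simply recovers the original gate-voltage difference, which is what makes the split-gate structure usable as a field sensor.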

Sensing

When a field E is produced, the applied E is a function of d. The field can be detected using the DeFET by measuring Vout, which indicates the intensity of the applied E (Gallab 1028).

Actuation

The actuation part is made up of four electrodes, which serve two purposes:

  1. The electrodes maintain passive, stable particle levitation.
  2. They provide a strong electric field force so that small particles can be levitated (Gallab 1030).

Multiplexer

It is a 16×1 analog multiplexer used to multiplex the outputs of the 16 DeFET sensors. It consists of transmission gates and four control terminals. A particular DeFET output can be obtained by applying the corresponding combination of the four digital control signals. Additionally, by driving these four control terminals with the output of a 4-bit counter, all 16 DeFETs can be scanned continuously, which is a crucial feature for the applications (Gallab 1034).
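A minimal sketch of this scanning scheme, assuming placeholder sensor voltages: each state of a free-running 4-bit counter selects exactly one of the 16 multiplexer inputs, so every DeFET is visited once per 16 clock cycles.

```python
# Sketch of a 4-bit counter driving a 16-to-1 analog multiplexer.
# The DeFET output voltages are placeholders, not measured values.

def mux_select(inputs, s3, s2, s1, s0):
    """Route one of 16 inputs to the output per the 4 control bits."""
    index = (s3 << 3) | (s2 << 2) | (s1 << 1) | s0
    return inputs[index]

sensor_outputs = [0.1 * i for i in range(16)]  # placeholder DeFET voltages

# Driving the select lines from the counter visits every channel.
scanned = []
for count in range(16):
    bits = [(count >> b) & 1 for b in (3, 2, 1, 0)]
    scanned.append(mux_select(sensor_outputs, *bits))

print(scanned == sensor_outputs)  # True: all 16 channels scanned in order
```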

Inputs for Control

Fully automated CMOS micro-system board

It is made up of:

  1. An oscillator, which produces a square waveform with frequencies ranging from 1 MHz to 10 MHz.
  2. A filter, which filters the output of the oscillator.
  3. A phase shifter, used to invert the input with a gain of -1 or below.
  4. A selection part, consisting of 8 switches. It takes its inputs from the phase shifter and the filter and can distribute these signals to the inputs of the quadruple electrodes.
  5. A decoder: a 4-bit counter, which takes its input from the clock at a given frequency. It controls the multiplexer's continuous scanning of the 16 DeFET detectors.
  6. A level shifter, which shifts the output of the counter to 1.8 V for the multiplexer.
  7. LabVIEW: a visual programming language development environment (Gallab 1035).
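The oscillator and phase-shifter stages above can be sketched as follows; the 1 MHz frequency is taken from the oscillator's stated range, while the ideal square-wave and unity-magnitude inverter models are simplifying assumptions.

```python
# Sketch of two stages of the signal chain: a square-wave oscillator
# and a gain -1 phase shifter. Both models are idealized.
import math

def oscillator(freq_hz, t):
    """Square wave: +1 during the first half-period, -1 during the second."""
    return 1.0 if math.sin(2 * math.pi * freq_hz * t) >= 0 else -1.0

def phase_shifter(sample, gain=-1.0):
    """Invert the input, as the gain -1 stage does."""
    return gain * sample

f = 1e6      # 1 MHz, within the oscillator's 1-10 MHz range
t = 0.1e-6   # 0.1 us: inside the first half-period
s = oscillator(f, t)
print(s, phase_shifter(s))  # 1.0 -1.0
```

The selection switches would then route either the filtered or the inverted signal to each of the four electrodes.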

Electrical Testing

The micro-system is tested at various frequencies such as 1 MHz, 2.8 MHz, and 8.6 MHz. Fig. 6 below shows the output of various sections of the board at 1 MHz (Gallab 1036).

Biomedical Testing

The system is tested at 819.7 kHz with hemolymph, which is obtained from snails (Gallab 1037).

Conclusion

The LOC can actuate the electric field E and sense it in real time, which is an important merit of the LOC prototype. The micro-system can be used in biomedical lab-on-chips, in environmental monitoring by sensing particles in the air, and in the detection of scratches.

Works Cited

Gallab, Ballawy. “A novel electric field sensor for lab-on-chip for biomedical applications.” IEEE Journal of Sensors 6.4 (2006): 1027-1037. Print.

Nanotechnology and Bio-Electrospray: In the Context of Biomedical Applications

Introduction

Nanotechnology is the key to developing and creating structures so tiny that they cannot even be viewed under an ordinary microscope.

This technology allows for the creation of nanofibers and other nano-sized materials. It has been well documented that certain industries have already benefited from the use of nanotechnology, especially when it comes to the creation of nano-sized yet greatly enhanced materials. It did not take long before scientists working in the field of biomedicine began to study nanotechnology, especially for the delivery of drugs and the construction of biological microenvironments (Yi et al., p.189).

In recent years, one of the most promising applications has been the use of electrospraying to manipulate cells and transport them without compromising their integrity. A team of researchers discovered that bio-electrospraying is a viable method of transporting stem cells to engraft and repair an affected organ.

Nanotechnology

Before going any further, it is important to have a clear grasp of the measurements involved when discussing nanotechnology. There are different opinions as to how one can accurately measure particles below the micro-level. However, most will agree that “you can think of nanotechnology dealing with anything measuring between 1 and 100 nm… larger than that is the microscale, and smaller than that is the atomic scale” (Bonsor & Strickland, p.2). Thus, when people talk about nanotechnology, they refer to measurements smaller than those found on the microscale.

One can only imagine the extreme minuteness of the particles and materials that belong to the world of nanotechnology. The best way to appreciate the importance of this technology, however, is to examine the products that scientists have been able to produce using this technological framework.

The most significant contribution can be seen in the field of engineering as revealed in the following statement: “Engineers are trying to use nano-size wires to create smaller, more powerful microprocessors” (Bonsor & Strickland, p.2). Nanowires are wires with a diameter of 1 nm and as a result, these things can be utilized to build very tiny transistors for computer chips (Bonsor & Strickland, p.2). The implication is that computers can be very compact and yet very powerful at the same time.

There are also carbon nanotubes. Using nanotechnology, scientists were able to line up carbon atoms into sheets and then roll these sheets up. One byproduct of this process is a material that is hundreds of times stronger than steel and yet many times lighter (Bonsor & Strickland, p.3). These are exciting discoveries; however, they are just the tip of the iceberg, so to speak.

Another major benefit of nanotechnology is seen in the biomedical field. It can be used in the delivery of drugs and nucleic acids. In the pharmaceutical world, this is called the drug/gene delivery system, or DGDS (Guan et al., p.115). Nanotechnology can be used to create nano-sized structures that carry a particular drug or nucleic acid to its particular target. The conventional means of drug delivery is in the form of free, unassociated molecules, but health experts discovered that “This strategy is simple, but it is becoming increasingly limited in many conditions, especially in those involving the use of highly cytotoxic chemotherapeutics and environmentally sensitive biopharmaceutics” (Guan et al., p.116). This simply means that the potency of the drug is undermined because it is unable to reach its target before being degraded by the environment.

Cell Electrospinning and Electrosprays

It has been proposed that electrospraying could be used in tandem with nanotechnology. This can be applied in biomedicine wherein medical experts can rebuild organic tissues in the same way that electrospinning is used to create microfibers. This idea is a result of rapid advances in the related field of electrospraying.

As early as 1934 a process was patented “wherein an experimental setup was outlined for the production of polymer filaments using electrostatic force… a high voltage is used to create an electrically charged jet of polymer solution or melt, which dries or solidifies to leave a polymer fiber” (Virginia Tech, p.1). This is why they can create microfibers from a polymer solution. If the same principle is used in medicine to create organic structures instead of microfibers the process is called cell electrospinning (Bartolovic, p.157).

Another technique in the creation of fine particles is the use of electrical force to generate a fine liquid aerosol. The conventional way of creating fine liquid aerosol is through pneumatic methods as gas forces a liquid solution out of the container so that it will effuse out of a nozzle.

Applying nanotechnology, there is now a better way of doing this: charging the liquid to a very high voltage. As a result, “The charged liquid in the nozzle becomes unstable it is forced to hold more and more charge… soon the liquid reaches a critical point, at which it can hold no more electrical charge and at the tip of the nozzle it blows apart into a cloud of tiny, highly charged droplets” (New Objective, Inc., p.1).
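The “critical point” described here is commonly modelled by the Rayleigh charge limit, a standard result from electrohydrodynamics rather than a figure taken from the sources cited above; a rough evaluation for a 1 µm water droplet:

```python
# Rayleigh limit: the maximum charge a droplet can hold before the
# electrostatic repulsion overcomes surface tension and it blows apart.
# q_max = 8 * pi * sqrt(eps0 * gamma * r^3). Values are illustrative.
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def rayleigh_limit(surface_tension, radius):
    """Maximum charge (coulombs) a droplet of the given radius can hold."""
    return 8 * math.pi * math.sqrt(EPS0 * surface_tension * radius**3)

# Example: 1 um water droplet (surface tension ~0.072 N/m).
q_max = rayleigh_limit(0.072, 1e-6)
print(f"{q_max:.2e} C")  # on the order of 1e-14 C
```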

Scientists are saying that if one will combine the different attributes of nanotechnology and electrospraying and use them in biomedicine, the result would be bio-electrospraying, a novel way of creating organic structures or biological microenvironments.

Bio-electrosprays

The concept is simple to understand, but the byproduct of bio-electrosprays is unknown. The principles of electrospraying show that bio-electrospraying utilizes electrical force to move cells out of a prepared solution and onto a specific target. This requires manipulating the cells and, to some extent, exposing them to a level of stress that may damage them and therefore prevent them from accomplishing their purpose. This is the first thing that comes to mind, considering that cells are fragile organic structures that require an optimized environment to function and survive.

An experiment had to be conducted to determine whether a cell can be manipulated to such an extent that it is transported from a solution and targeted to a specific location in the body of a recipient, yet still maintain its integrity. As mentioned earlier, organic matter and living organisms are very sensitive to changes in the external environment. There is no need to elaborate on the fact that for bio-electrospraying to work, the cells must not only be manipulated but also charged using electrical force.

To answer these questions, a group of scientists devised a study wherein they can determine beyond doubt if indeed a cell that passes through a bio-electrospray system can be transported into a host tissue without adverse effects to the cell itself and the recipient. This was achieved by using stem cells as opposed to ordinary types of cells.

Furthermore, the research team decided to use stem cells harvested from mice; at the same time, they flushed out bone marrow from the mouse legs. The control group was given a solution containing stem cells via ordinary methods of transplantation, while the experimental group received stem cells via bio-electrospraying.

The team wanted to know whether the stem cells that passed through the bio-electrospray equipment were damaged, considering the conditions the stem cells had to endure to move from the container to the leg of the mouse. Using Trypan blue staining techniques, the research team discovered that there was no significant difference between the stem cells that came out of the bio-electrospray equipment and the stem cells that were directly applied to the mouse leg in the control group.
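As a sketch of how such a comparison works, Trypan blue exclusion yields a simple viability percentage (dye-excluding, unstained cells are counted as live); the counts below are invented for illustration and are not figures from the study.

```python
# Trypan blue exclusion: viability = live / (live + dead).
# All counts here are hypothetical.

def viability_percent(live_count, dead_count):
    """Percentage of cells that exclude the dye (counted as live)."""
    total = live_count + dead_count
    return 100.0 * live_count / total

control = viability_percent(188, 12)         # 94.0 %
electrosprayed = viability_percent(186, 14)  # 93.0 %
print(abs(control - electrosprayed) < 5)     # comparable viability
```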

It has to be pointed out that the mice went through an irradiation process that eliminated their bone marrow. If the stem cells that came from bio-electrospraying were altered or negatively affected by the process, then the stem cells would not function as expected. Thus, a short while after bio-electrospraying, the mice would be expected to die, because the stem cells would have failed to engraft and repopulate the blood system with much-needed blood cells.

When the procedure was completed, the research team discovered that there were no significant differences between bio-electrosprayed cells and controls, and the results were even comparable to cells taken from untreated wild-type mice (Bartolovic et al., p.162).

The research team also conducted other tests, such as examining the levels of myeloid cells, B cells, and T cells, and they explained that this would “indicate whether the stem cells had been affected by the jetting procedure as any damage to the cells could alter their homing, engraftment or differentiation potentials” (Bartolovic et al., p.163).

The team reported, “There is no significant difference between the proportion of myeloid cells, B cells and T cells in control mice (CC) and the recipients of cells subjected to bio-electrospraying (BES) for any cell type examined in the peripheral blood, bone marrow or spleen” (Bartolovic et al., p.163). This is proof that bio-electrospraying has tremendous potential when it comes to biomedicine.

This is an important breakthrough because beforehand there were lingering questions with regard to the viability of bio-electrospraying in biomedicine. It is one thing to release fine particles from a solution full of chemicals and quite another to create fine mists containing live cells.

The use of stem cells is also very crucial because it not only demonstrated the safety of bio-electrospraying but also showed whether the technique can negatively alter the structure of the cells to the extent that they can no longer function as expected. In the said experiment, the research team was able to demonstrate that the stem cells were still capable of saving the life of a mouse, because the stem cells from BES were able to engraft and repopulate.

Conclusion

Nanotechnology has come a long way from developing microfibers and nanowires used in engineering and other industrial applications. In the 21st century, scientists are working towards utilizing nanotechnology in the field of biomedicine. There is a consensus that to create nano-sized structures for DGDS purposes, there is also a need for a delivery system that allows pinpoint accuracy while ensuring that the chemical or organic materials are not harmed or altered in the process.

This means that scientists had to adopt electrospraying and transform it into bio-electrospraying. Researchers were able to demonstrate that bio-electrospraying is a viable tool to deliver cells because, in one particular experiment, stem cells were jetted from a charged needle and yet this did not negatively affect their structure and functions when they were applied to a dying animal.

This is a breakthrough because it can serve as a basis to explore other applications, be it in tissue grafting or in the creation of biological microenvironments.

This means that instead of the use of a scalpel to remove skin tissue from a donor and graft it into the recipient, the use of bio-electrospraying can achieve the same result with pinpoint accuracy. However, there is much work to be done because it is not yet clear how to manipulate stem cells to create tissues and organs. Nevertheless, the result of this experiment is a step in the right direction.

Works Cited

Bartolovic, Kerol et al. “The differentiation and engraftment potential of mouse hematopoietic stem cells is maintained after bio-electrospray.” Analyst (2010): 157-164.

Bonsor, Kevin, and Jonathan Strickland. How Stuff Works. 1998. Web.

Guan, Jingjiao et al. “Polymeric nanoparticles and nanopore membranes for controlled drug and gene delivery.” Biomedical Nanostructures (2008): 115-137.

New Objective, Inc. What is Electrospray? Web.

Virginia Tech. Electrospinning. 2011. Web.

Yi, Allen et al. “Overview of Polymer micro/nanomanufacturing for biomedical applications.” Advances in Polymer Technology. 27.4 (2008): 188-198.

Imaging Speed in Biomedical Engineering

Introduction

Biomedical engineering has lately been widely acknowledged and applied across major facets of life, such as industry and medical research; these applications have been especially appreciated in the medical fraternity. Improved diagnosis of diseases that could not previously be achieved effectively is on the verge of attaining full support from diagnostic techniques such as wide-field confocal microscopy, full-field optical coherence tomography, and ultrasound using a 2-D phased-array transducer. Among many other features, these methods share the ability to improve image resolution by ensuring the right speed is utilized. It is projected that sooner or later these methods will replace the conventional methods currently in use. However, these novel technological advances also come with major disadvantages that limit their large-scale application: they are all very expensive; they require skilled personnel to operate, which calls for additional specialized training; and they are complex and hard to maintain. This paper will look at the basic principles each of these techniques applies to achieve its functions, compare their advantages and disadvantages, and finally look at the possible areas where the improved speed is applied in real life.

Wide Field Confocal Microscopy Using Structured Illumination Method

Among other important features, this microscope can utilize high-speed tandem scanning reflected-light microscopy, where a stabilized configured objective is employed to attain the required high speed (Watson 1994, p. 168). A substitute for achieving the high-speed requirement involves the use of a video-rate laser scanner, which has the added advantage of being able to control the size of the aperture and the laser intensity.

This improved speed, coupled with the ability to eliminate background information from outside the plane of focus and therefore reduce image degradation, is a key feature that improves image quality (Langhorst et al. 2009, p. 858; Gustafsson et al. 2008, p. 4960). Other features, such as serial collection of optical sections of specimens, are tied to this technique. When using this microscopy, it is possible to quantify depth, for it adds a third dimension by imaging only one plane within the sample at a time (Liang et al. 1997, p. 751).

From a schematic representation of this setup, one can deduce the main principle and how the technology achieves its function: the laser system (the excitation source) emits coherent light, which is directed to pass through an aperture. The aperture is situated on a conjugate plane, and the front of the detector (a photomultiplier tube) acts as the second position of the pinhole aperture for scanning (Toomre and Pawley 2006, p. 222).

This laser is reflected from a dichromatic mirror to scan the biological sample under test, which is aligned in a certain focal plane. In this same focal plane on the sample, secondary fluorescence is emitted and passes back through the dichromatic mirror; it is then focused on the detector aperture as confocal points (Liang et al. 1997, p. 751).

Advantages

A key advantage is that the technology utilizes optical sectioning; this helps remove the artifacts that usually occur when a specimen is physically sectioned or stained with fluorescent stains, as in traditional microscopy. Not only is clarity enhanced in this non-invasive optical technology, it is also useful when examining either a living or a fixed specimen over a wide range of conditions (Gustafsson et al. 2008, p. 4962).

Disadvantages

When using a high-speed setting, capturing and recording data becomes very difficult unless the data is transferred to a video microscope; the analogue videos obtained are also hard to digitize before processing (Watson 1994, p. 172).

Resolution is limited by the numerical aperture of the components and by both the incident and detected light wavelengths (Toomre and Pawley 2006, p. 222). It is fair to state that the high-speed images obtained cannot be represented effectively either on paper or in photographs (Watson 1994, p. 172).

Applications

The main feature is that high-speed imaging achieved using TSM has been applied in ophthalmic imaging, where speed is of the essence since the retina and cornea keep moving in vivo. The use of a high-speed stabilized configured objective has been applied in dentistry, where real-time examination of teeth is required to observe not only remaining cracks, dentine fluid flow, and the cutting of hard tissues but also the interfacial regions of adhesive restorations; this has helped in comprehending how these interfaces respond to stress factors (Watson 1994, p. 171; Auran et al. 1994, p. 184). It has also enabled reconstruction of the slices obtained to give a three-dimensional (3D) view, which allows easy analysis of the sample volume (Hanley and Jovin 2001, p. 1115). Another advantage is that any one slice is crisper and clearer than a full-field fluorescence image (Toomre and Pawley 2006, p. 221). The speed factor has also been applied in ophthalmology to detect changes in the optical features of the human eye; speed is crucial here because the human eye cannot remain static in vivo, hence the examination time must be very short.

Full Field Optical Coherent Tomography (OCT) Imaging

This is also referred to as parallel optical coherence tomography (OCT). OCT is one of the technological advances employed to give very high image resolutions when taking cross-sectional tomographic images of inner tissues from biological samples. The device uses an image-sensor camera referred to as a charge-coupled device, or simply CCD, to acquire the image, though it is also possible to use complementary metal-oxide-semiconductor sensors.

The 2D target sample to be examined is usually illuminated full-field; light is then simultaneously collected from all the pixels, after which it is imaged with the camera to produce an image in what is known as the en face orientation (Bouma and Tearney 2002, p. 6983). This orientation is orthogonal to the optical axis, and it is obtained without light-beam scanning. This is believed to eliminate the need for electromechanical lateral scans (Bordenave et al. 2002, p. 2060). It is also possible to obtain a reconstructed three-dimensional representation by simply stepping the reference mirror and recording successive en face images. Image size is not a serious problem for full-field OCT because it has a parallel-processing property (Shoude et al. 2008, p. 10).

This technology has been widely applied in medical imaging settings; it has been used where high-resolution images are desired to achieve the best therapeutic approach (Fercher 1996, p. 158). Mainly, it has been applied in ophthalmology to visualize highly resolved images of the retina and other segments of the eye (Hoerauf and Bimgruber 2002, p. 487). It has also been used in industries where non-destructive testing is desired.

Principle of operation

The principle applied is based on the interference between the light travelling from the source to the test object and the light reflected back by this object through a beam splitter or semi-transmissive mirror. It depends on the phase-shifting method used to extract the interference fringe envelope in white-light microscopy (Bouma and Tearney 2002, p. 67). The fringe envelope is obtained by combining several interferometric images once a discrete or continuous temporal phase shift has been introduced. The phase-stepping method involves using a known stepping amount when measuring the intensity. Inertial forces would limit the operation speed if mechanical displacement were used to induce the phase shift (Bouma and Tearney 2002, p. 69).

Its long measurement time limits the application of this approach to static objects. The system therefore employs the integrating-bucket technique, which involves integrating the intensity as the phase is shifted continuously. Shifting occurs in a linear, sawtooth-like manner, which enables the recording of several buckets, or integrated values of intensity. The integrating technique is better than phase stepping because it allows faster operation; an increase in speed is realized in this case.
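The bucket idea can be illustrated with the simpler four-step phase-shifting algorithm (discrete π/2 steps rather than a true continuous integration, so this is a simplified sketch), which recovers the fringe envelope from four intensity frames:

```python
# Four-step phase shifting: with frames I_k = B + A*cos(phi + k*pi/2),
# the envelope A is recovered as sqrt((I4-I2)^2 + (I1-I3)^2) / 2.
import math

def fringe_amplitude(i1, i2, i3, i4):
    """Recover the fringe envelope A from four pi/2-shifted frames."""
    return math.sqrt((i4 - i2) ** 2 + (i1 - i3) ** 2) / 2.0

# Synthetic frames: background B = 2.0, amplitude A = 0.5, phase 0.7 rad.
B, A, phi = 2.0, 0.5, 0.7
frames = [B + A * math.cos(phi + k * math.pi / 2) for k in range(4)]
print(round(fringe_amplitude(*frames), 6))  # 0.5
```

Note that the background term B cancels in both differences, which is why the envelope can be extracted regardless of the (unknown) phase.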

Some researchers have come up with ways to improve the speed at which this machine can be applied. For instance, Dubois and Boccara (2009, p. 23) developed a method that could extract the fringe envelope more rapidly than conventional methods. This matters because increased speed is paramount when taking measurements from biological materials that do not remain static over long measurement periods. They used a sinusoidal phase-modulation method in which the reference mirror was made to oscillate, together with four integrating buckets (Dubois and Boccara 2009, p. 23).

Application

This method can be applied in industries where quality assurance and quality control are desired, since it gives cross-sectional images of biological materials or any other samples under test without damaging or touching them. It can also be applied in monitoring continuous production and assembly in a manufacturing plant.

Full field OCT technique: experimental arrangement

The equation below summarizes the axial resolution, which depends on the coherence length of the illumination and is inversely proportional to the spectral bandwidth:

Δz = (2 ln 2 / π) · λ² / (n · Δλ)

where n represents the refractive index of the medium, λ represents the centre wavelength, and Δλ represents the spectral width (Dubois et al. 2004, p. 2876).
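The axial-resolution relation Δz = (2 ln 2 / π) · λ² / (n Δλ) can be evaluated for illustrative numbers (an 800 nm centre wavelength with 100 nm bandwidth; these values are chosen for the example, not taken from the cited work):

```python
# Axial resolution of OCT from centre wavelength and spectral width.
import math

def axial_resolution(n, center_wl, spectral_width):
    """Delta z = (2 ln 2 / pi) * lambda^2 / (n * delta lambda)."""
    return (2 * math.log(2) / math.pi) * center_wl**2 / (n * spectral_width)

# Example: 800 nm source, 100 nm bandwidth, n = 1 (air).
dz = axial_resolution(n=1.0, center_wl=800e-9, spectral_width=100e-9)
print(f"{dz * 1e6:.2f} um")  # ~2.8 um
```

The inverse dependence on Δλ is the reason broadband (low-coherence) sources are preferred: doubling the bandwidth halves Δz.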

Ultrasound Using 2-D Phased-Array Transducer

The rate at which data is acquired by this apparatus depends on receive-mode parallel processing in both steering dimensions, θ and φ. Alternatively, the addition of ratio delay lines to the main electronics will extend the steering angle of the receive mode (von Ramm and Stephen 1990, p. 262) and increase the rate of data acquisition. The elements are usually arranged in a certain pattern, for instance in a linear array (Turnbull and Foste 1992, p. 344). The elements can be pulsed either singly or together so as to produce a pattern of wavefronts; these wavefronts interfere before eventually giving rise to a common beam profile (Saleh and Smith 2004, p. 9). This profile can be varied by varying the wave height as well as the duration for which each element is excited within the beam profile. One major attribute is software control of the amplitude, coupled with a time delay for each of the elements; in achieving this, the focal law plays a major role (Ditchburn and Ibrahim 2009, p. 56).
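For plain beam steering of a linear array, the focal-law delays reduce to Δt_i = i · d · sin(θ) / c, a standard relation rather than one given in the cited sources; the pitch, angle, and sound speed below are illustrative assumptions.

```python
# Per-element time delays that steer a linear array's beam by theta:
# each element fires d*sin(theta)/c later than its neighbour.
import math

def steering_delays(n_elements, pitch_m, angle_deg, c_m_s=1540.0):
    """Delay in seconds for each element to steer the beam by angle_deg."""
    dt = pitch_m * math.sin(math.radians(angle_deg)) / c_m_s
    return [i * dt for i in range(n_elements)]

# Example: 8 elements, 0.3 mm pitch, 30 degree steer in soft tissue.
delays = steering_delays(8, 0.3e-3, 30.0)
print(f"{delays[-1] * 1e9:.1f} ns from first to last element")
```

A full focal law would additionally curve the delay profile to focus at a chosen depth; this sketch covers only the steering term.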

This technique has been applied in medical fields where non-destructive evaluation is desired, and chances are that it is the technology most expected to replace the conventional methods currently employed in medical diagnostics (Bulavinov et al. 2010, p. 68).

Time resolution plays a significant role in resolving small structures of the biological material. Since highly resolved visualization of laterally separated targets is required, good focusing aids in achieving this. Most importantly, when the pulse has a shorter length, resolution of tiny targets separated by shorter distances is possible in any biological sample (Fenster et al. 2001, p. R69).

According to Ditchburn and Ibrahim (2009, p. 57), the beam-profile modification produced by a phased-array probe enables three major scanning techniques, outlined below.

  1. Linear scanning: a division or cluster of the array elements is pulsed so as to form a beam profile; once this is in place, the focal law producing this profile is multiplexed electronically along the entire array length.
  2. Dynamic depth focusing: the focal law is varied by moving the focal points electronically along the beam axis.
  3. Swept angular (sectorial or azimuthal) scanning: the focal laws are chosen either to bring about beam steering at a fixed angle with respect to the incoming ray or to sweep the beam over a wider angular range.

These advanced scans have been modified and programmed so as to suit the inspection with relative ease. The operator is required to specify the expected focal distance of the intended beam as well as the scanning patterns, though the elements' time delays are computed by the machine's software (Sharifi and Zadeh 2004, p. 21).

Advantages

High-speed utilization, which ensures an increased rate of data acquisition, has been applied in 3D imaging, where 2D images are reconstructed to give a volumetric image. This feature has made it possible to size tumors, locate masses, and realistically visualize various biological parts under examination in medical diagnosis (von Ramm and Stephen 1990, p. 264). Increased sensitivity and coverage are realized through the machine's ability to control the shapes and directions of the beams it generates. Since the complex scan allows interrogation of a larger volume of material from a single probe, increased coverage is realized (Ditchburn and Ibrahim 2009, p. 57).

The ability to produce immediate images not only allows straightforward visualization of internal biological structure but also simplifies interpretation of the acquired data. Although this feature is also found in conventional instruments based on mechanical scanning systems, it is mainly restricted there to larger-scale or advanced system applications (Fenster et al. 2001, p. R69; Turnbull and Foste 1992, p. 344).

Disadvantages

As expected, application of high speed requires the technician to read the acquired data very quickly and comprehend the information presented; however, this is not always possible for human operators. Maintaining the plane of focus is tedious because of the complex interface between the cornea and the contact objective (von Ramm and Stephen 1990, p. 265). The apparatus is also expensive to purchase; the probe is even worse, costing over five times more than a single-crystal transducer. It requires skilled personnel to operate it and to correctly interpret the wide range of data presentations acquired from the system, which may be very difficult for most users. Where an inexperienced operator is tasked with its usage, chances are that the efficiency of the machine will be limited by inappropriate selection of its settings. Equally, operators may hold the misconception that a sector scan produced by a stationary probe is able to test all the discontinuities within the sweep range.

Another demerit is the lack of accepted inspection criteria and standards, as well as calibration blocks, applicable to these phased arrays. When setting up the apparatus for the first time, one can take considerable time getting everything right. Focal-law settings such as the inspection angle, scan patterns, and focal distance, among other parameters, require a keen and thorough review (Bulavinov et al. 2010, p. 69). Once these have been set, however, it is usually possible to save the program for retrieval in future inspections if need be.

Their large probe dimensions also pose a disadvantage in that it is hard to attain good ultrasonic coupling with the inspection surface. The waviness and surface condition of the biological material to be inspected therefore become important factors when using a phased array (Bulavinov et al. 2010, p. 69). When the beam is focused too shallowly, a deeper discontinuity is likely to be missed.

Application

The key features enabling the application of this apparatus are its high numerical aperture, resulting from its high light-gathering capability, together with optical scanning and resolution. These features support high-speed examination of biological tissues and cells, which are in constant motion; speed is therefore of importance when in vivo testing is to be carried out. Relevant medical specialties include the examination of cardiac function and of the cornea and retina (von Ramm and Stephen 1990, p. 265).

Conclusions

If these novel technological advances are fully integrated into their various applications, life will improve for all the people likely to use them. It is, however, important that key research on both the known and unknown areas of their application be conducted and made clear; this would not only make them safe for human use but also keep them within the ethics of science. Biomedical science and engineering has proved to be a field capable of delivering such advances.

Reference List

Auran, D. J., Koester, J. C., Raparport, R & Florakir, J. G.,1994. Wide field scanning slit in vivo confocal microscopy of flattening-Induced corneal bands and ridges. Scanning 16(3), pp. 182-186.

Bordenave, E., Abraham, E., Jonusauskas, G., Tsurumachi, N., Oberle´, J., Rulliere, C., Minot, P.E., Lassegues, M. and Surleve B.J.E., 2002. Wide-field optical coherence tomography: imaging of biological tissues. Appl. Opt., 41, pp. 2059– 2064.

Bouma, B.E. and Tearney, G.J., 2002. Optical source. In: E. Bouma and J. Tearney, eds. 2002. Handbook of Optical Coherence Tomography. New York: Marcel Dekker, pp. 67–97.

Bulavinov, A., Pinchuk, R., Pudovikov, S., Reddy, K.M. and Walte, F., 2010. Industrial Application of Real-Time 3D Imaging by Sampling Phased Array. Moscow: European Conference for Non-destructive Testing.

Ditchburn, R.J. and Ibrahim, M.E., 2009. Ultrasonic phased array for inspection of thick-sectioned welds. Victoria, Australia: Maritime Platforms Division.

Dubois, A. and Boccara, A.C., 2009. Full-field Optical Coherence Tomography. Applied optics, 63(11), pp. 48-60.

Dubois, A., Greeve, K., Mooney, G., Lecaque, R., Vabre, L. and Boccara, C., 2004. Ultrahigh resolution full-field Optical Coherence Tomography. Applied optics, 43(14), pp. 2874-2883.

Fenster, A., D´onal, B.D. and Neale, H.C., 2001. Three-dimensional ultrasound imaging. Phys. Med. Biol., 46, pp. R67–R99.

Fercher, A.F., 1996. Optical coherence tomography. J. Biomed. Opt., 1, pp. 157–173.

Gustafsson, M.G.L. et al., 2008. Three-Dimensional Resolution Doubling in Wide-Field Fluorescence Microscopy by Structured Illumination. Biophysical Journal, 94(12), pp. 4957-4970.

Hanley, Q.S and Jovin, T.M., 2001. Highly multiplexed optically sectioned spectroscopic imaging in a programmable array microscope. Applied Spectroscopy, 55, p.1115.

Hoerauf, H. and Birngruber, R., 2002. Optical coherence tomography in the anterior segment of the eye. In: E. Bouma and J. Tearney, eds. 2002. Handbook of Optical Coherence Tomography. New York: Marcel Dekker, pp. 487–503.

Langhorst, M., Schaffer, J. and Goetze, B., 2009. Structure brings clarity: Structured illumination microscopy in cell biology. Biotechnology Journal, 4(6), pp. 858-865.

Liang, M., Stehr, R.L., and Krause, A.W., 1997. Confocal pattern period in multiple Aperture confocal imaging systems with coherent illumination. Optics Letters, 22, pp. 751-753.

Saleh, K.Y. and Smith, N.B., 2004. Two-dimensional ultrasound phased array design for tissue ablation for treatment of benign prostatic hyperplasia. Int. J. Hyperthermia, 20(1), pp. 7–31.

Sharifi, H. and Zadeh, S.H., 2006. New 2D ultrasound phased array design for hyperthermia cancer therapy. Int. J. Hyperthermia, 12(3), pp. 18-26.

Shoude, C., Sherif S., Mao, Y. and Flueraru, C., 2008. The Large Area Full-Field Optical Coherence Tomography and its Applications. Open Optics Journal, 2, pp. 10-20.

Toomre, D. and Pawley J.B., 2006. Disk-scanning confocal microscopy. In: J. Pawley, ed. 2006. Handbook of Biological Confocal Microscopy. New York: Springer Science+Business Media, LLC, pp. 221-238.

Turnbull, D.H. and Foste, F.S., 1992. Simulation of B-Scan Images From Two Dimensional Transducer Arrays: Part Ii-Comparisons between Linear and Two- Dimensional Phased Arrays. Ultrasonic Imaging, 14, pp. 344-353.

von Ramm, T.O. and Stephen, S.W., 1990. Real time Volumetric Ultrasound Imaging System. Journal of Digital Imaging, 3(4), pp. 261-266.

Watson, T.F., 1994. Application of high speed Confocal Imaging Technique in Operative Dentistry. Scanning, 16(3), pp. 168-173.

Cell Culture and Biomedical Applications

Biological advancements have contributed to the improvement of society in various forms. Biomedical research, which forms the foundation for a spectrum of inventions and discoveries, has its origin in techniques like cell culture.

Cell culture involves creating, in the laboratory, an artificial growth environment that mimics the natural one in all its characteristic features. Growing the cells of animals, plants, humans, yeast, or bacteria in a laboratory setting thus constitutes cell culture. It is mainly employed to examine new drugs and detect infectious agents (Dictionary of Cancer Terms n.d.).

The methodology of cell culture is somewhat complicated: in order to achieve robust cell growth, specific conditions need to be maintained.

The key process in handling living eukaryotic cells is, initially, mandatory awareness of materials and methods. These include a 37°C CO2 incubator, phosphate-buffered saline, plasticware, glassware, Petri dishes, trypsin/EDTA, vials for cryopreservation, media such as DMEM, a hemocytometer with cover slip, and DMSO and FBS for cell freezing (Protocol: Cell Culture 2012). Cell culture begins with primary culture.

This constitutes the stage where cells are taken from a tissue and multiplied in a suitable atmosphere until they fully cover the growth platform, known as the substrate, resulting in confluence (Introduction to Cell Culture 2012).

At this point the cells need re-culture, known as passaging or subculture, achieved by transferring them to a new vessel with fresh growth medium to provide more space for prolonged growth. Primary culture is an important step and a prerequisite for any kind of cell culture technique; a failure to maintain the primary culture properly can lead to total failure of the overall culture process (Introduction to Cell Culture 2012).
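The cell counting that precedes seeding and passaging is simple arithmetic, and a short sketch may make it concrete. The sketch below assumes a standard Neubauer hemocytometer (each corner square corresponds to 0.1 µL, hence the 10^4 conversion factor); the function names, dilution factor, and example counts are illustrative, not taken from the cited protocol.

```python
# Hypothetical sketch of hemocytometer-based passaging arithmetic.
# Assumes a standard Neubauer chamber: each corner square holds 0.1 uL,
# so cells/mL = mean count per square * dilution factor * 1e4.

def cells_per_ml(square_counts, dilution_factor=2):
    """Estimate cell concentration (cells/mL) from corner-square counts."""
    mean_count = sum(square_counts) / len(square_counts)
    return mean_count * dilution_factor * 1e4

def seed_volume_ml(concentration, target_cells):
    """Volume of suspension needed to seed target_cells into a new vessel."""
    return target_cells / concentration

counts = [52, 48, 50, 50]        # cells seen in four corner squares
conc = cells_per_ml(counts)      # 50 * 2 * 1e4 = 1,000,000 cells/mL
vol = seed_volume_ml(conc, 5e5)  # 0.5 mL of suspension seeds 5e5 cells
```

Counting several squares and averaging, as here, smooths out uneven cell distribution across the chamber.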

Maintenance of a cell culture is firmly linked with safety and with cross-contamination issues. This is because a cell culture unit contains many particularly dangerous agents associated with hand contact: corrosive chemicals and solutions, and tissues and cells of plant, animal, or human origin. The potential dangers include accidental needle punctures, spills on the skin, mouth contact through pipetting or ingestion, and inhalation of infectious agents from sprays and similar exposures.

To overcome these problems, agencies such as the National Institutes of Health (NIH) and the Centers for Disease Control (CDC) have provided biosafety recommendations for the United States. These centre on four biosafety levels (BSL). BSL-1 is the primary option of protection for many laboratories involved in basic and clinical research.

BSL-2 is suitable for moderate-risk agents that can cause human disease of varying severity through percutaneous injury or contact with mucous membranes. BSL-3 is suitable for indigenous agents with a capacity for aerosol transmission that can cause serious infections. BSL-4 is suitable for indigenous agents that carry a high risk of fatal infection via infectious aerosols and for which no therapy exists (Introduction to Cell Culture 2012).

Only high-containment laboratories, however, possess such agents. Hence, there are specific guidelines that not only ensure safety but also help to avoid all possible chances of contamination from both prokaryotic and eukaryotic sources.

These are: wearing the specified personal protective equipment and replacing contaminated gloves with new ones; disposing of all wastes suspected of contamination; washing hands after contact with dangerous materials and before leaving the laboratory; avoiding smoking, drinking, and the consumption or storage of food in the lab; close adherence to institutional rules and regulations on handling glassware, pipettes, scalpels, and needles; minimising the generation of aerosols and leakages; removing surface contamination around the work place with a suitable disinfectant before and after experiments and after spills of infectious material; regular cleaning of laboratory devices; and instant reporting to a laboratory authority of any incident involving contact with infectious agents (Introduction to Cell Culture 2012).

Next, to prevent contamination from sources such as sneezing, skin shedding, spores, and dust, which constitute aerosols and airborne particles, employing a cell culture hood is essential. The hood should be set up in a location with restricted openings such as windows and doors and with no movement of personnel.

The work place must contain only the necessary reagents, labware, and protocols. One must disinfect the work place and clean instruments regularly, before and after use, with 70% ethanol; use ultraviolet light at frequent intervals to sterilise the air and surfaces of the hood; keep the hood running throughout the available time; and switch it off when there is no work.

In cell culture, the cell lines are the most important consideration (Introduction to Cell Culture 2012).

The cell lines are defined as the products of primary culture obtained by subculture.

Cell lines derived from primary culture possess a short life span and are known as finite cell lines. When these cell lines are subjected to passaging, the resultant cells can acquire robust phenotypic and genotypic stability marked by vigorous growth potential.

As such, cell line growth is achieved in two ways. One is monolayer or adherent culture, which is achieved on artificial substrates; the other is suspension culture, achieved in a free-floating medium (Introduction to Cell Culture 2012).

Cell line contamination needs to be understood from the point of view of biological contamination in general. It may be grouped into bacterial, mould and virus, mycoplasma, and yeast types of contamination. Bacterial contamination is recognised by visual observation of the culture during the very first days of infection.

Contaminated cultures appear turbid, with a low medium pH and the tiny bacteria themselves visible. Moulds are a special category of eukaryotic microorganisms; in the early stages of infection they contribute to turbidity, with spore clumps and thin thread-like filaments visible under the microscope.

Viruses are microscopic agents with high multiplication potential. Infected cell lines can be identified by the polymerase chain reaction (PCR), immunoassays, immunostaining, and electron microscopy (Introduction to Cell Culture 2012). Mycoplasma are bacteria without a cell wall; their infection of cell lines contributes to altered cell metabolism, low multiplication potential, agglutination of suspension cultures, and so on.

Detection is possible through PCR, immunoassays, and, most importantly, Hoechst 33258 fluorescent staining. Yeasts are eukaryotic microorganisms whose infection contributes to turbidity and pH variation, with rounded cells microscopically observable in the culture (Introduction to Cell Culture 2012).

Very often in cell culture, the growth of unrelated cells leads to contamination and to cell line growth beyond the expected limit. This is cross-contamination, which may be interspecies or intraspecies among human cell lines. Possible detection strategies include cytogenetic analysis and DNA fingerprinting.

By employing this approach, earlier investigators were able to detect cross-contaminated hematopoietic cell lines obtained from different sources, including those belonging to the original researchers.

This situation of cell line cross-contamination points to a constant need, within the cell culture protocol, for viability checks and identification. Maintenance of multiple cell lines is sometimes a contributing factor; it can be managed by regular monitoring of specificity and identity through markers, karyotyping, and immunoprofiling (Drexler, Dirks, & MacLeod 1999).

To better overcome the problem of contamination, the U.S. National Institutes of Health has commissioned cell line authentication for investigations. Here, a private firm, Promega, has come forward with a PCR system in a multiplex format known as the StemElite™ ID System (Oostdikv et al. 2009).

This approach recognises contamination in a variety of cells, such as those of mouse and human, by comparing a standard genotype with the genotype developed by the StemElite™ ID System (Oostdikv et al. 2009). Even for the detection of plant cell culture contamination, the recommended strategy is aseptic maintenance of cultures from meristem explants under Hazard Analysis Critical Control Point (HACCP) and good laboratory practice (GLP) guidelines (Cassells & Prestwich 2009).
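The genotype comparison mentioned above can be sketched as a simple short-tandem-repeat (STR) profile match: each cell line is represented by its alleles at a set of STR markers, and an observed profile is scored against a reference. The marker names, allele values, and the 80% match threshold below are illustrative assumptions, not the actual StemElite™ algorithm.

```python
# Illustrative sketch of STR-profile-based cell line authentication.
# Marker names and allele values are hypothetical examples; commercial kits
# use defined marker panels and their own documented match criteria.

def match_fraction(reference, observed):
    """Fraction of reference alleles also present in the observed profile."""
    shared = 0
    total = 0
    for marker, ref_alleles in reference.items():
        obs_alleles = set(observed.get(marker, ()))
        shared += len(set(ref_alleles) & obs_alleles)
        total += len(set(ref_alleles))
    return shared / total

reference = {"TH01": (6, 9.3), "D5S818": (11, 12), "TPOX": (8, 8)}
observed  = {"TH01": (6, 9.3), "D5S818": (11, 13), "TPOX": (8, 8)}

score = match_fraction(reference, observed)  # 4 of 5 reference alleles match
authentic = score >= 0.8                     # assumed threshold for a call
```

An ≥80% allele match is a commonly cited convention for declaring two profiles the same line, but any real-world decision would follow the kit's own criteria.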

Pure cell lines are important for a variety of applications, such as the production of blood Factor VIII and erythropoietin (EPO) and hybridoma technology for producing monoclonal antibodies (Applications of Animal cell culture 2009). Large-scale culture of cells is carried out in industry in order to scale up for the development of cell bank systems.

For this purpose, huge bioreactors such as the compact-loop bioreactor are used; these optimise the cells in the medium by providing biological, physical, and chemical factors. For cultures operated in batches, spinner flasks and microcarrier beads are used for scale-up (Applications of Animal cell culture 2009). This indicates that pure cell lines are very important for scale-up processes in industry.
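The scale-up described above ultimately rests on exponential growth arithmetic: a seed culture must expand through enough doublings to inoculate the next, larger vessel. The sketch below illustrates this; the doubling time and cell numbers are hypothetical, not values from any particular bioreactor process.

```python
# Hypothetical scale-up arithmetic: how long a seed culture must grow
# before it can inoculate a larger bioreactor, assuming steady
# exponential growth with a fixed doubling time.
import math

def hours_to_reach(n_start, n_target, doubling_time_h):
    """Time for an exponentially growing culture to expand from
    n_start to n_target cells: doublings needed * doubling time."""
    return doubling_time_h * math.log2(n_target / n_start)

seed = 2e8      # cells available in a spinner-flask seed culture
target = 1.6e9  # cells needed to inoculate the production vessel
t = hours_to_reach(seed, target, doubling_time_h=24)  # 3 doublings -> 72 h
```

In practice growth slows as nutrients deplete, so such estimates are lower bounds; but they show why each scale-up stage is planned around the culture's doubling time.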

It is reasonable to mention that impure cell line growth may very often contribute to adverse reactions in bioreactors. Impure cell lines may release unwanted by-products that can become toxic and affect the downstream process. More probably, they may interfere with the routine biological and chemical conditions provided by a bioreactor, as mentioned earlier. This may not only alter the yield of the culture but also affect the equipment, with a high risk of inter-product contamination.

Thus, cell culture appears to be a vital research strategy for a variety of biomedical applications.

References

Applications of Animal cell culture, 2009. Web.

Cassells, A. C. & Prestwich, B. D., 2009. Web.

Dictionary of Cancer Terms, n.d. Web.

Drexler, H. G., Dirks, W. G., & MacLeod, R. A. 1999, ‘False human hematopoietic cell lines: cross-contaminations and misinterpretations’, Leukemia, vol. 13 no.10, pp.1601-7.

Introduction to Cell Culture, 2012. Web.

Oostdikv K., Petterson A., Schagat T. & Storts D. 2009, Stem Cell Line Authentication and Contamination Detection. Web.

Autophagy Mechanisms: Biology and Medicine Breakthrough

Introduction

The Japanese molecular biologist Yoshinori Ohsumi was recognized for his work on autophagy, i.e., how the cell recycles its own constituent elements. The discovery is described below, together with a brief explanation of its significance and the influence it might exert on the development of science while benefitting humankind.

Autophagy is a process in which the cell digests and recycles some of its own components. It is a key element in many physiological processes, eliminating redundant components so that new ones can appear. Prior to Ohsumi’s research, the 1960s saw the discovery of the cell’s capacity to transfer its contents in enclosed membranes to the lysosome, where the contents are recycled.

Description of the Discovery

Autophagy processes were initially studied in the 1960s by Christian de Duve, who eventually received a Nobel Prize in 1974 for his research on the functional and structural organization of the cell. In 1995, Ohsumi conducted experiments with yeast cells and mammalian cells, discovering fifteen autophagy genes in yeast. The function of the encoded proteins was thereby clarified, and it was established that the same autophagy mechanisms work in both yeast cells and human cells.

Due to the research conducted by Ohsumi, it is now known that autophagy mechanisms play a crucial role in the cellular response to various kinds of stress, including starvation, as well as in aging processes. The described mechanisms are also crucial for cellular homeostasis, the differentiation of cells, and other biological processes that involve a considerable turnover of cytoplasm. Autophagy’s significance for the protection of cells was emphasized, as well as its capacity to counteract infections and strengthen the overall immune system. The latter capacity is called xenophagy; it is crucial for fighting invading organisms, boosting immune responses, and preventing the spread of infections.

Significance and Impact of the Discovery

Even though autophagy mechanisms were partially studied by de Duve in the 1960s, they remained poorly understood until recently. Ohsumi’s experiments with yeast and mammalian cells shed new light on the matter. Due to his research, the significance of autophagy in human physiology and disease prevention is clear. The results of Ohsumi’s experiments provide a basis for further research into disease prevention by means of targeting autophagy mechanisms.

Due to Ohsumi’s research, we know that in autophagy a membrane structure develops in the cytoplasm and sequesters elements of the cytoplasm (2). The emerging structure is enclosed and sealed into a double-membrane formation, i.e., the autophagosome. The results of the research demonstrated that Atg proteins are involved in the process of autophagosome construction (2). It was also demonstrated that vesicles of the Atg9 protein become part of the autophagosome membrane.

Moreover, Ohsumi’s discoveries revealed that in the case of selective autophagy, autophagosome formations select certain cell contents that are to be recycled (3). The contents that are to be degraded are delivered to lysosomes in mammals and vacuoles in yeast and plants (3). Selective autophagy is demonstrated to be highly significant since it also degrades intracellular pathogens, as well as certain types of damaged organelles (3).

It was established that autophagy mechanisms are linked to the onset of many diseases, including Parkinson’s disease and type 2 diabetes. Thus, the significance of Ohsumi’s discovery is clear. The importance that autophagy mechanisms have in the processes of disease prevention and aging ensures that there is a possibility of developing new treatments and prevention methods. Moreover, mutations that occur in the discovered genes linked to autophagy mechanisms might indicate the possibility of a congenital disease.

Certain irregularities in autophagy genes have also been associated with cancer and the development of certain neurological conditions. Research is now underway to clarify the potential correlation with various diseases.

The Nobel Committee called Ohsumi’s research “paradigm-shifting” because it is a huge step forward in the fields of biology, medicine, and disease prevention. His discoveries have changed our understanding of the cell and its recycling capacity. A study by Ahn et al. suggests that autophagy mechanisms play an important role in tumor suppression processes (1). The implications of Ohsumi’s discovery are deemed to be significant in the field of biology and medicine.

Conclusion

Yoshinori Ohsumi’s research has had a considerable impact on the development of biology and medicine because it changes the way we understand the autophagy mechanisms in human cells, as well as the processes of disease pathogenesis. The results are of utmost importance since they are promising with respect to medical treatment and preventative care of many diseases, including neuropathy and cancer. The results of Ohsumi’s work should provide a basis for further research and the development of new treatments and prevention methods.

References

  1. Ahn, J.-S., Ann, E.-J., Kim, M.-Y., Yoon, J.-H., Lee, H.-J., Jo, E.-H., Lee, K., Lee, J.-S., & Park, H.-S. (2016). . Web.
  2. Suzuki, S. W., Yamamoto, H., Oikawa, Y., Kondo-Kakuta, C., Kimura, Y., Hirano, H., & Ohsumi, Y. (2015). . Web.
  3. Nakatogawa, H., & Ohsumi, Y. (2014). . Web.

Female Bodies in Science and Biomedicine

The assigned readings focus on the ways the female body is regarded in the context of biomedicine. The authors share their views on such areas as the reproductive system, cancer, and disability. It is stressed that females still face different types of discrimination as gender is socially constructed, which is manifested in various spheres. The readings shed light on numerous details that often remain unnoticed and taken for granted. This paper includes a brief summary of these works and some reflections on the points mentioned.

It is necessary to start the discussion with the work by Wendell (2015). In this brief piece, the author stresses that disability, like gender, is socially constructed. This social construction defines the development of disability and the way people with special needs are treated. I totally agree with the author’s perspective on the matter, as modern society contributes to the development and spread of discriminatory views and practices. The attitude towards people changes when their disability becomes known. For instance, people’s attitudes will change once they understand that the person they want to address has a hearing disability.

The social construct is also present in the way healthcare professionals and researchers treat females and issues related to female health. For instance, Martin (1999) emphasizes that even aspects that could or had to be seen as neutral have certain negative connotations. Thus, the female reproductive system is regarded as degrading and rather less efficient as compared to the male reproductive processes. After reading this article, I felt that there are some instances of this kind of discrimination. It is quite obvious that the scientific world is gender-biased, which has a significant impact on health care.

Fausto-Sterling (1999) provides a clear case for the link between biased attitudes in research and health care. The author describes the ways people have seen and treated menopause. When reading this article, I was constantly thinking about the way women’s health is regarded. The concept of degradation persistent in research and practice led to the development of drugs and treatment that could address only a limited number of symptoms. Women turn out to be rather vulnerable as their condition is still regarded as unavoidable degradation rather than a natural physiological process.

Finally, two other readings focused on the way women with cancer deal with their health condition. Lorde (1999) notes that the feminist perspective on cancer, as well as females’ issues related to this illness, is still under development. I agree with this statement only partially as this topic has acquired considerable attention in western society recently. However, there are still certain gaps associated with specific theoretical paradigms. The article by Kosofsky Sedgwick (1999) adds to this discussion as the author claims that gender shapes the way healthcare professionals treat females. However, females’ needs are often ignored. Thus, the healthcare professional focuses on symptoms and previous health history without trying to really talk to a person. The major aspect that resonated with me was the physician’s unwillingness to see the patient as someone who can be equal rather than inferior to healthcare professionals. Practitioners are often ignorant of and reluctant to hear their patients’ remarks concerning their health. Professionals tend to focus on their expertise believing that their patients’ opinion has little relevance.

In conclusion, it is necessary to note that gender is one of the constructs shaping the way people are treated and healthcare services are provided. It is essential to continue voicing all these issues and make people (both men and women) discuss them and acknowledge their influence. It is important to make sure that all people receive equal treatment that addresses their needs.

References

Fausto-Sterling, A. (1999). Menopause: The storm before the calm. In J. Price & M. Shildrick (Eds.), Feminist theory and the body: A reader (pp. 169-178). New York, NY: Routledge.

Kosofsky Sedgwick, E. (1999). Breast cancer: An adventure in applied deconstruction. In J. Price & M. Shildrick (Eds.), Feminist theory and the body: A reader (pp. 153-156). New York, NY: Routledge.

Lorde, A. (1999). A burst of light: Living with cancer. In J. Price & M. Shildrick (Eds.), Feminist theory and the body: A reader (pp. 149-152). New York, NY: Routledge.

Martin, E. (1999). The egg and the sperm: How science has constructed a romance based on stereotypical male-female roles. In J. Price & M. Shildrick (Eds.), Feminist theory and the body: A reader (pp. 179-189). New York, NY: Routledge.

Wendell, S. (2015). The social construction of disability. In S. Shaw & J. Lee (Eds.), Women’s voices, feminist visions: Classic and contemporary readings (pp. 101-107). New York, NY: McGraw-Hill Education.

Biomedical Mechanical Engineering and Mechanical Prosthetics

The main aim of this paper is to cast light upon the history of biomedical mechanical engineering and of the mechanical prosthetics that are in such demand in our society. In addition, the paper presents different models of prosthetics and weighs their advantages and disadvantages against their costs. This material will be of interest not only to specialists in the area but also to people who use prosthetics to facilitate their lives.

Biomedical mechanical engineering aims at researching the human body and producing special tools that facilitate the lives of people who have been injured. The field emerged with the use of X-ray machines and electrocardiographs, which made it possible to reach a diagnosis with the help of technology. The boom in its development dates to the period after World War II.

The combined use of research in mechanics and biology helps to produce prosthetic devices that resemble human limbs and can move. Mechanical prosthetics are well developed in modern medicine. This branch of medicine concerns itself with the replacement of missing parts of the body. Prosthetics has a long history, and researchers and specialists have done their best to facilitate the lives of the disabled. Although nothing can fully replace a part of the body, prosthetics make the lives of the disabled better. People have used different artificial devices to compensate for the loss of a limb; in ancient times a forked tree limb served as a crutch to replace the leg. The history of prosthetics dates to 300 B.C., when people used crude devices to compensate for the loss of a limb. Early prosthetics were made by armor makers, blacksmiths, and other specialists who were highly skilled at working with wood, metal, and leather. One of the first references to the use of prosthetics appears in the works of the French surgeon Ambroise Pare, who in 1579 described the methods of producing prosthetics he used in his practice. As a military surgeon, Pare amputated soldiers’ shattered arms and legs and worked on methods of designing and producing prosthetics (Waverly, 2009). According to German history, the Knight of Iron was described as having an artificial hand whose fingers could even move (Pederson, 2008). In the 1700s, metal prosthetics were ousted by wood and leather ones, and later incorporated joints began to be used instead of stiff solid limbs.

Prosthetics were in demand during wartime. During the American Civil War (1861-1865), there were nearly 30,000 amputations, which demanded solutions for the disabled. Wooden sock limbs produced in New York cost from $75 to $150. During World War II (1939-45), new materials, namely aluminum and plastics, were used in the production of prosthetics, making their use easier and more comfortable. With the Vietnam War in the 1960s-70s came the need to improve the production of prosthetics further, and electronic controls appeared that facilitated their use and made them resemble human limbs more closely.

Biomedical mechanical engineering and mechanical prosthetics are well developed nowadays. Mechanical prosthetic heart valves, feet, and hands are the main achievements in this field. The pioneer of mechanical valvular prosthesis production was Dr. Charles Hufnagel, who in 1952 inserted “a Plexiglas cage with a ball occluder” into the descending thoracic aorta. The first implant of this kind was inserted in 1960. A number of different caged-ball designs followed, produced by Magovern-Cromie, Smeloff-Cutter, and DeBakey-Surgitool (Mechanical Heart Valve, 2008). Nevertheless, caged-ball implants had many disadvantages: they produced a larger pressure drop across the valve and higher turbulent stresses downstream. The next significant achievement was the tilting-disc valve introduced by Bjork-Shiley in 1967. A further step in the development of heart valve production was the introduction of the bileaflet valve in 1978 by St. Jude Medical Inc. Many models of heart valve prosthetics have since been produced; nevertheless, three continue to be widely used in medicine, namely the caged-ball, tilting-disc, and bileaflet designs. Each model of prosthetic has its own advantages and disadvantages.

Table 1. Characteristics of the most popular valve designs

Caged ball
Design characteristics: time tested; structurally sound, with built-in redundancy in the strut design; low levels of regurgitation in the closed phase.
Design-related drawbacks: relatively large valve height; flow separation downstream of the valve, which might lead to thrombus formation.

Tilting disc
Design characteristics: better hemodynamic characteristics than the caged-ball design; lower valve height, hence suitable for all anatomical locations; the maximum number of valves used to date is of the tilting-disc design.
Design-related drawbacks: lower levels of redundancy in the cage strut structure; lower minor-orifice flow can lead to tissue overgrowth and thrombosis; strut fracture and related complications have occurred in certain models.

Bileaflet
Design characteristics: uniform flow profiles; lower levels of structural complications.
Design-related drawbacks: hinge design prone to thrombus formation and valve failure; leaflet escapement reported in certain models.

As for the design of foot prosthetics, there are likewise many models actively used in modern medicine. Nevertheless, it is impossible to produce a flexible prosthetic that functions in exactly the same way as a human foot. The design of a prosthetic depends on the goals to be achieved. First of all, a foot prosthetic must be easy to carry and light in weight (Hansen, 2012). Cosmesis and volume are also very important goals. Modern foot prosthetics function like human body parts, allowing standing, walking, and even running. The price of such models depends on the complexity of the design and its functions.

Hand prosthetics are more complicated in structure, since all of their parts must be flexible and function as fingers. The hand is a very important part of the body with many functions essential to daily living. Early designs were quite bulky, relying on electromechanical motors. New designs guarantee a 12 degree-of-freedom range of movement; they are more comfortable and more lightweight, and may be used alongside either an artificial or a natural hand. The mechanical design combines high-strength polymers such as PTFE and PEEK with low-density metals such as titanium, which makes the prosthetics more flexible and more closely resembling a human hand.

Prosthetics facilitate the lives of disabled people, who do not want to be regarded by others as disabled; they want to be equal to others and lead full lives. There are three main advantages of using prosthetics: energy, mobility, and psychological benefits. People need less energy to walk or to work with the help of prosthetics than with crutches or other devices. Given a choice between a wheelchair and prosthetics, most people choose the latter, since prosthetics are more convenient and mobile; moreover, many places are not accessible by wheelchair. According to Doug Hewitt, “Prosthetics provides a greater sense of independence” (2011). People using prosthetics have a more positive psychological outlook on the world than those who use crutches and wheelchairs. According to the Amputee Coalition of America, “amputees feel less discomfort with their conditions when wearing prosthetic legs because of the ability to blend in better with the crowd” (Hewitt, 2011).

Although there has been great success in the application of prosthetics, there are also complications, which comprise primary valve failure, prosthetic valve thrombosis, and mechanical hemolytic anemia (Johnson, 2010). Various complications accompany the application of prosthetics, ranging from severe to minor. There are many cases of postoperative infection. For example, leg and hand prosthetics may cause dermatitis where the amputated limb meets the prosthesis, which may lead to ulcerations, infections, malignancies, and allergic contact dermatitis (ACD). Moreover, prosthetics may cause tissue proliferation at the amputation site, which leads to loss of sensation. Lumbar deformation may be caused by wearing a prosthesis for a long period (Short-Term & Long-Term Effects of a Prosthetic Leg, n.d.). Another problem concerns mechanical failure of prosthetics, which has consequences both minor and serious, pain first of all. That is why it is very important to follow all of the doctor’s instructions when using prosthetics and to be examined regularly.

The cost of prosthetics is another big challenge for patients who lack body parts, since acquiring them requires financial stability, which excludes some patients from benefiting from them. Prosthetics also require frequent checkups to ensure that they serve their purpose for the patient. For some organs it is hard to mimic the natural organ, and hence some of the functions performed by a normal hand or organ cannot be performed by its mechanical prosthetic counterpart.

The cost of prosthetics is likely to go down with time: as more biomedical engineers enter the field each day, competition among the companies marketing their products will drive prices down. A prosthetic leg may cost from $5,000 to $50,000, and an arm from $3,000 to $30,000 (Turner, 2009). The cost of a prosthesis depends on the materials used in its production and the functions it offers. Hand prosthetics are the most expensive, and not everyone can afford to buy them. Muscle control over the arm is also likely to improve as more flexible prosthetics are developed. This can be seen by comparing older and current kinds of prosthetics being manufactured (Turner, 2009).

The government recommends that the field of prosthetics and orthotics be regulated under the Act, since the candidate group lacks the managerial and monetary resources to control the business itself. Families recommend that the use of orthopedic braces be improved, as this would help more people who suffer from these conditions. Families also recommend that the cost of prosthetics be reduced, since high costs prevent the poor from benefiting from mechanical prosthetics that can only be acquired at great expense. Moreover, there are doctors’ recommendations concerning the usage of prosthetics: many consequences follow when patients ignore the doctor’s instructions and use prosthetics inappropriately. It should be noted that biomechanical engineering needs improvement in every field. The government should financially support medical projects for improving prosthetics, since the lives of many people depend on them.

In conclusion, the various interventions in prosthetics ought to be emphasized, as they have so far greatly helped patients whose missing organs can be replaced by artificial ones. Disabled people should be supplied with the necessary prosthetics despite their costs, and the government should take care of them. In the same way, the government should assist the engineers involved in making prosthetics to ensure that they get the resources they need.

References

Dasi, L., Simon, H., Sucosky, P., & Yoganathan, A. (2009). Web.

Hansen, A. (2012). Prosthetics: Foot and Ankle Prosthetics. International Encyclopedia of Rehabilitation. Web.

Hewitt, D. (2011). Advantages of Prosthetic Legs. Web.

Mechanical Heart Valve. (2008).

Pedersen, B. (2008). The History of Aesthetics of Prosthetics.

Short-Term and Long-Term Effects of a Prosthetic Leg. (n.d.). Web.

Turner, R. (2009). Prosthetic Costs. Disabled World towards tomorrow.

Waverly, J. (2009). The History of Prosthetics Legs. Web.

The Relation Between Patients and Biomedicine

Introduction

Patients have questioned the effectiveness of biomedicine since the start of the Internet era. Free access to information used to be considered an incontestable benefit; still, there is a question of how reliable online information can be. Lay people cannot fully understand medical issues or distinguish between peer-reviewed and unreliable sources. The number of members of online health groups is increasing, and these are the communities where unreliable data can be shared easily and quickly. This affects patient-provider relationships, as patients try to participate in decision-making without professional competence. I think nothing can be done about the growth in the number of online health communities or about patients’ deep interest in biomedicine; the challenge is to deal with these phenomena so as to minimize their negative effects.

Main body

I have looked over the Diabetes Support Group site and can say that the main concern of patients is the everyday management of the disease. There are many messages from newcomers asking the simplest questions and seeking advice (DSG). People who are not used to living with diabetes feel at a loss. Hilliard et al. state that “diabetes self-management is complex and demanding, and isolation and burnout are common experiences; the Internet provides opportunities for people with diabetes to connect with one another to address these challenges” (261). What patients mainly seek is reassurance; they look for online social support.

Online health groups allow people to benefit from support, which is important both for the sick and for their caregivers. With online communication, neither group feels isolated in managing everyday demands. A member of a community does not need expertise or knowledge to calm a companion. A patient benefits from online support until the moment he or she starts considering an interlocutor a medical authority. According to Centola and van de Rijt, “little is known about how people select their health contacts in these virtual domains” (19). Most group members are not medical doctors; they are people interested in biomedicine, but their practical disease-management tips are not to be followed.

I think laypeople cannot fully understand medical literature, and their ability to make “informed decisions” is arguable. Many patients become interested in biomedicine because the Internet provides an enormous quantity of sources. Social media never stop generating healthcare-related information, but its credibility is a challenge (Hajli et al. 238). Even assuming the information is credible, it must still be understood correctly: biomedical issues require a sophisticated approach and a scientific background. The price of wrong self-made decisions is one’s own health, or the health of other members of the online group who follow the advice of a false expert.

The proactive position of patients can be beneficial in following the official treatment protocol, not in decision-making. The situation seems optimistic, as Rupert et al. claim that “participants described online health communities as supplementing information from healthcare providers, whom they perceived as too busy for detailed discussion. Almost all participants shared OHC content with HCPs” (326). It should be stated that health-seeking behaviors exclude amateur performance.

The number of patients who have extensively researched their ailments is growing. Some healthcare providers believe that biomedical information from online communities has a negative influence on patient-provider relationships (Rupert et al. 326). Professional ethics, however, leaves no room for irritation: all necessary explanations should be given regardless of how meaningful the questions are. If handled properly, people with a deep interest in medicine can be transformed into the most attentive patients.

There are many patients dissatisfied with modern biomedicine, and ever more people are becoming active in their health care. Free access to information drives this situation. There are a few reasons for patients’ dissatisfaction, and none of them is connected with any decrease in medical competence. First of all, the flow of data, instead of making people more informed and confident, embarrasses and irritates them. Secondly, many people have grown used to relying on “an opinion from the Internet”; usually, some member of an online health group becomes an authority. The difference in approach between a doctor and an Internet companion raises the question of whom to believe first; someone has to be ignored, and the necessity to choose annoys. On the whole, free access to information combined with a lack of critical reasoning causes dissatisfaction with biomedicine.

Conclusion

How to deal with patients questioning the effectiveness of biomedicine is an acute question nowadays. The amount of data accessible online is going to grow, and online health communities, which among other things share unreliable information, are gaining power. The challenge is to protect patients and to persuade them that online groups can serve only for social support. Monitoring the content of popular groups could be helpful. The underlying problem is a low level of critical reasoning, and the right approach would be to develop it starting with elementary school pupils. To overcome the dissatisfaction and regain trust in biomedicine, healthcare providers should take their time explaining and dispelling myths from the Internet.

Works Cited

Centola, Damon, and Arnout van de Rijt. “Choosing Your Network: Social Preferences in an Online Health Community.” Social Science & Medicine, vol. 125, 2015, pp. 19-31.

Diabetes Support Group, 2018. Web.

Hajli, Nick M., et al. “Credibility of Information in Online Communities.” Journal of Strategic Marketing, vol. 23, no. 3, 2015, pp. 238-253.

Hilliard, Marisa E., et al. “The Emerging Diabetes Online Community.” Current Diabetes Reviews, vol. 11, no. 4, 2015, pp. 261-272.

Rupert, Douglas J., et al. “Perceived Healthcare Provider Reactions to Patient and Caregiver Use of Online Health Communities.” Patient Education and Counseling, vol. 96, no. 3, 2014, pp. 320-326.

Biomedical Informatics and Pharmacovigilance

The article by Beninger and Ibara (2016) advocates for improvement in pharmacovigilance through the integration of biomedical informatics advancements. The practice has evolved since 1961, when the response to the thalidomide tragedy shaped modern drug-safety monitoring. In addition, the collaborative efforts between individuals and organizations have supported the establishment of these agendas. The analysis of the study makes it possible to assess the measures taken to enhance the role of biomedical informatics in healthcare.

Beninger and Ibara (2016) define pharmacovigilance as the science and associated activities in the assessment, recognition, comprehension, and avoidance of adverse effects relating to drugs. The review aims to establish a new perspective of the concept, particularly from the lens of biomedical informatics. Luo et al. (2017) note that there have been numerous vital developments in this field. However, studying the basis for the growth of knowledge in this discipline may be of good use to the medical community.

The research in question illuminates the implications biomedical informatics has for pharmacovigilance. According to Beninger and Ibara (2016), the field has contributed to significant infrastructural advancement since its inception. However, as Hauben et al. (2018) note, a critical systematic assessment of the influence of biomedical informatics on promoting pharmacovigilance remains unexplored. Rapid developments pose a challenge in integrating technology to enhance the safety of information transmitted across different platforms. Therefore, the authors advocate for rethinking the integration of technology (Beninger & Ibara, 2016). The main purpose of this solution is to maximize gains in pharmacovigilance.

Biomedical informatics has been pivotal in advancing infrastructural gains in pharmacovigilance. Although the pace is accelerating with time, the anticipated integration of concepts between the two areas remains low. Beninger and Ibara (2016) suggest that those in the field should recognize the changes and take the necessary measures to reap maximum benefits. Additionally, since the information comes from diverse sources, professionals should adopt structures for consolidating the improvements.

References

Beninger, P., & Ibara, M. A. (2016). . Clinical Therapeutics, 38(12), 1-12. Web.

Hauben, M., Reynolds, R., & Caubel, P. (2018). . Clinical Therapeutics, 40(12), 1981-1990. Web.

Luo, Y., Thompson, W. K., Herr, T. M., Zeng, Z., Berendsen, M. A., Jonnalagadda, S. R., Carson, M. B., & Starren, J. (2017). . Drug Safety, 40(11), 1075-1089. Web.

The Analysis of the Results of the Biomedical Research

The article is devoted to the analysis of the results of biomedical research carried out in Lebanon and the United Arab Emirates (Bissar-Tadmouri & Tadmouri, 2009). The closing paragraph addresses the cost-effectiveness of the stated health care program. Based on the results of the research, the authors claim that successful biomedical research requires the presence of well-developed global networks, and the analysis has shown that this system needs proper reformation. Although the improvement of the network system is apt to demand extra expenses from the UAE, the scientists are firmly convinced that this development will be highly beneficial for Arabic international links. In the authors’ opinion, the investment in biomedical science will be compensated by the favorable outcomes of the research carried out. Therefore, the authors regard biomedical research as a cost-effective measure that can positively influence the health care systems of the regions mentioned above.

Reference

Bissar-Tadmouri, N., & Tadmouri, G. O. (2009). Bibliometric analyses of biomedical research outputs in Lebanon and the United Arab Emirates (1988-2007). Saudi Medical Journal, 30(7), 130-139. Web.