Scottish Parliament Project Management Issues

Introduction

Construction of the current Scottish Parliament Building began in June 1999, and the first debate in the chamber took place on 7 September 2004. Although the building was initially planned to be in use by 2001, it did not open until 2004.

This was a delay of more than three years beyond the initial plan, and the final cost of £414m was many times higher than the original estimate of £10m to £40m. Because of this cost escalation and the accompanying delay, a public inquiry headed by Peter Fraser, the former Lord Advocate, was established in 2003 to investigate the construction. This followed criticism from politicians, the Scottish public, and the media. In September 2004, the inquiry concluded and criticized the construction management for the way it implemented major design changes that resulted in cost increases and delay (Auditor-General Report, 2000).

Reasons for project delay and cost increment

The initial estimates for constructing the building were between £10m and £40m in 1997, but by 2004 the cost of the project had risen to an estimated £430m, more than ten times the original upper estimate. Several reasons were given for this (Auditor-General Report, 2000):

  1. In 1997, the original cost projection of £10m to £40m was for housing the members of the Scottish Parliament, without taking into account the design and location of the new building.
  2. On 6 July 1998, Miralles's design was chosen and the figure was updated to £50m-£55m. This did not include site acquisition costs or VAT.
  3. A provisional cost estimate of about £109m was issued by the then Minister Donald Dewar on 17 June 1999, taking into account site costs, consultancy fees, VAT, demolition, risk, archaeology, and contingencies.
  4. On 5 April 2000, a new projection of £195m was issued.
  5. An official new report was issued in November 2001 with an estimated figure of £241m. The increase reflected escalated costs from design changes and increases in floor space. Construction problems arising from the attempt to finish the project by May 2003 were another reason given for the increase at this point. Sir David Steel, who was then the Presiding Officer, informed the Scottish Parliament Finance Committee that re-scheduling of work was driving up costs.
  6. Due to increased security needs, which required that a bomb-proof external fabric be incorporated into the building, the cost had risen to £295m by October 2002. The delays that occurred at this time raised the cost further, to £300m by December 2002.
  7. George Reid, the new Presiding Officer, citing increased consultancy fees, issued a new monthly report on the schedule with an estimated cost of £373.9m. Due to construction problems on the interior of the building, the estimate had risen to £400m by September 2003.
  8. By February 2004 there was a further increase, again associated with construction problems, bringing the estimate to £430m.
  9. The building was opened in October 2004, and a final cost of £414.4m, £16.1m less than the previous estimate, was reported to the Scottish Parliamentary Corporate Body.

There were many controversies in this construction that resulted in cost escalation and delay. First, the unique architecture of the Scottish Parliament was complicated by cost increases and design changes. Other controversies included the decision to have a new building at all, the selection of the site, the appointment of a non-Scottish architect, and the appointment of Bovis as construction manager after it had earlier been excluded. While costs were rising, the cross-party Scottish Parliamentary Corporate Body (SPCB) took over control of the building project from the Scottish Office. Even with all the controversies and heightened media attention on the Holyrood project, a vote to continue with the project was won in a debate held by the Scottish Parliament (Bain, 2004).

In August 1999, a further 4,000 square meters (43,000 sq ft) of floor space was proposed by the architect. Due to this increase, the cost had risen to £115m by September 1999. An independent report by architect John Spencely, commissioned by the SPCB, indicated that up to 20% could be saved on the ongoing project, and that shifting to another site or abandoning the project entirely would result in some £30m of additional cost. Poor communication between construction officials and the SPCB was one of the causes of increased cost cited by Spencely. On account of the Spencely report, a debate by MSPs on whether to continue or abandon the project on the Holyrood site was won by the majority in favor of continuing on 5 April 2000 (Balfour & McCrone, 2005).

The death of Miralles in July 2000 and the presence of a multi-headed client, consisting of the SPCB, the architectural adviser, and the Presiding Officer, further complicated the project. The client took over the running of the project from the Scottish Executive, formerly known as the Scottish Office. Design changes made on account of security then saw costs increase; later, the proposal to incorporate more security was rejected, as it was seen as a major factor driving up cost (Balfour & McCrone, 2005).

Quality was preferred over cost, as was early completion, although no significant acceleration was ever achieved. Only when it was too late to make a significant change were the architect's complex design and its inevitable cost fully appreciated.

Role of project management

The project management team consisted of the Clerk and Chief Executive of the Parliament, who was also the Principal Accountable Officer, and the project team led by the project director. Responsibility for managing and ensuring successful delivery of the project lay entirely with the project management. On behalf of the Scottish Parliament, the Holyrood Progress Group was mandated to guide and advise the project management team; the Progress Group, however, was not to be held responsible for the delivery of the Holyrood project (Taylor, 2002).

The decision to adopt construction management as the main procurement route was one of the causes of the 20-month delay. For most public sector building projects, construction management is unsuitable, and in the case of Holyrood it proved a poor choice. The main purpose of a procurement route is usually to transfer risk to those who can better manage it; under construction management, however, the risk stays with the client, since the design is usually uncertain and incomplete when construction begins (Burke, 2003). The client is therefore obliged to manage design development and to assemble a team of construction professionals.

Unfortunately, construction management was not fully implemented in the Holyrood project. The construction management team lacked the expertise and experience required in the early stages of the project, and so the challenges and risks were not fully appreciated by the project management or the client.

There was also the challenge of constructing an unusually complex building against tight deadlines on a densely developed site. The original time plan was highly compressed, challenging to follow, and left no room for slippage. Because of these factors, some of the trade contractors and architects could not deliver their critical duties on time (Tanner, 2001).

The project management failed to effectively control the development of the project design, which was characterized by major design changes throughout; the management failed to appreciate the project's complex design early enough. Unfortunately, most of this design development took place during construction, and because of the emphasis on high quality and the pressures of time, it was more difficult than normal to monitor the design development process.

The main cause of slippage was the release of design information later than the dates agreed in the construction managers' program. Design elements that were provided by the trade contractors and required the approval of the design team further delayed the project. Tight completion dates set by the client resulted in some of the work being unproductive and out of sequence, adding to the delays. The repeated slippage should have been a signal to the project manager to monitor performance, by measuring project achievements against contractual obligations and then enforcing those obligations strictly.

Although there were enormous problems associated with the project, the project management failed to address their main causes. The client's program was compromised because the contracts carried a high degree of uncertainty, and it was therefore difficult for the client to resist contractors' claims for extra time-related costs (Auditor-General Report, 2000).

Strategies that could have been taken to save on cost and delay

There are different strategies that can be adopted to avert this kind of delay and cost escalation in the future:

  1. For most building projects in the public sector, the construction management approach is not appropriate and should have been avoided.
  2. The contracting method should have been selected with care, after understanding the risks involved and how they would be averted.
  3. The project should have been scrutinized at different stages through the gateway review process.
  4. To encourage good performance, there ought to have been performance incentives for the contractors.
  5. Control and leadership of the project should have been centralized.
  6. Performance assessment should have taken place throughout the project.
  7. Adequate planning time ought to have been allocated before the project started (Barrie & Paulson, 1992).

Good management in future projects should display several desirable characteristics. It should have a clear understanding of the key stages of construction and the risks involved in the construction work, so as to devise ways to reduce or avert them. The management should also be able to select competent people and organizations with the skills required to complete the construction.

Project monitoring should go along with review of the budget and the key project milestones. This should be done using timely and reliable information, to ensure that appropriate remedial action is taken whenever necessary. There should be effective coordination and communication between all those involved in the supply chain. The right personality in the managers is a key requirement if they are to create and lead a team (Burke, 2003).

Where there is a competition for a designer, contractor, or consultant, several recommendations can be made:

  1. Pre-qualification questionnaires should be evaluated in an orderly manner.
  2. There should be consistency in making visits to the candidates’ offices.
  3. From start to conclusion, there should be a fully transparent competition record.

A rigorous and full evaluation of architects should be carried out whenever international architects are involved, to ensure that practices and working cultures are compatible. Where construction management is used as the procurement route, leaving the client, and ultimately the taxpayer, as the risk bearer, local government officials and civil servants should reflect long and hard on all the advantages and disadvantages of using such a route. A full report of the evaluated risks should be set before the political leadership (Taylor, 2002a).

Since the United Kingdom, including Scotland, is a member of the European Union, it is obliged to observe all EU procurement rules. Since not everyone in the inquiry had adequate knowledge of these rules, it is important that in the future no one be put in charge of a public project who does not appreciate EU procurement rules. Where independent professional advisers are retained, their views should be put before the Civil Service officials and any disagreeing parties, as well as before Ministers. The government should be as clear about public projects in which civil servants are involved as it is about private sector projects (Taylor, 2002b).

Because of the security concerns around public buildings, their safety and security should be a primary, integral part of the initial assessment of the proposed design, not an afterthought that keeps changing. Presiding officers should even stand oral questioning where major breaches of safety and security are encountered, as such breaches are a loophole for cost escalation.

Conclusion

The Scottish Parliament building project faced enormous challenges. The lack of an approved budget contributed to weaknesses in financial control and cost reporting. Since the construction management procurement method left most of the construction risks with the client, it was important to manage the contractors' performance and to have a distinct leadership plan. It is unfortunate that several parties were involved in leading and directing the project, which further complicated its control. The absence of a centralized point of control was a weakness of the system rather than of the individuals, who did their best in this complex and challenging project (Brown & Mann, 2005).

The tension between time, quality, and cost was visible throughout the project. When the time criterion was set too tightly, the flow of design information failed to meet expectations; when time parameters are tightly set and construction cannot keep up with the program, a cost penalty is incurred, as in the case of this project. The design flow could not keep pace with the program set by the client. This was the responsibility of the architect, owing to the indifferent communication and coordination between Barcelona and Edinburgh. After Mr. Brigg delivered his report in 2002, the client should have understood that more time was required for high-quality design work and that, with such complex designs, the program was unrealistic (Brown & Tanner, 2004).

The architects ought not to have signed up to programs that they could not honor. This would have allowed more time for accurate programming, ensuring that the anticipated design was achieved on time. The unique complexity and quality of the building were its most important features. If this had been appreciated early enough, it would have been clear that the completion date and the program were highly likely to be affected, and the significant extra cost could have been anticipated and discussed in good time.

Although security was considered quite early, its cost implications were underestimated by all, including the client; an estimated £100m of extra cost is attributed to security, which could be seen as a safe scapegoat for unnecessary cost increases. The production of design variations and the late delivery of information during construction caused all the slippage, and the project management should have done more to address these problems. With proper management, the same high quality could have been achieved.

Control and leadership of the project were not well established; Holyrood lacked a single point of control and leadership. Another shortcoming of the project was that the management of individual aspects such as cost, time, and quality was not properly allocated among the various parties. The running of the project became still more difficult because the leading project parties never fully agreed on a single cost plan. The project management, construction manager, and design team ought to have agreed on a cost plan to allow sound management of the project.

There are thus lessons to learn from the Holyrood project that may be applied to other significant public building projects. The main lesson is that the contracting method should be chosen with great care, having appreciated the benefits and the risks of each procurement option. The chosen method should transfer risk to those who are best positioned to manage it.

‘Gateway reviews’, introduced by the Office of Government Commerce for the procurement of major public projects, should be utilized in the future. They have the advantage of allowing thorough scrutiny of the project’s needs by a qualified team before the award of contracts. Irrespective of the method chosen, ample time should be allocated for planning before construction begins. Good planning will ensure that the construction sequence is right, avoiding extra costs and delays. It also entails managing and assessing project risks, as well as assessing the different parties’ contributions, to avoid inefficiency and wastage of resources (Auditor-General Report, 2000).

References

Auditor-General Report. 2000. “The New Scottish Parliament Building: An Examination of the Management of the Holyrood Project for Scotland.”

Bain, S. 2004. Holyrood – The Inside Story. Edinburgh University Press. ISBN 0-7486-2065-6.

Balfour, A. & McCrone, G. 2005. Creating a Scottish Parliament. StudioLR. ISBN 0-9550016-0-9.

Barrie, D. S. & Paulson, B. C. 1992. Professional Construction Management. Third edition. Singapore: McGraw-Hill International Editions.

Brown, K. M. & Mann, A. J. 2005. The History of the Scottish Parliament, Volume 2: Parliament and Politics in Scotland, 1567-1707.

Brown, K. M. & Tanner, R. J. 2004. The History of the Scottish Parliament, Volume 1: Parliament and Politics in Scotland, 1235-1560.

Burke, R. 2003. Project Management: Planning and Control Techniques. Fourth edition. London: Wiley & Sons.

Tanner, R. J. 2001. The Late Medieval Scottish Parliament: Politics and the Three Estates, 1424-1488.

Taylor, Brian. 2002a. The Scottish Parliament: The Road to Devolution. Edinburgh University Press. ISBN 0-7486-1759-0.

Taylor, Brian. 2002b. Scotland’s Parliament, Triumph and Disaster. Edinburgh University Press. ISBN 0-7486-1778-7.

Database Design: Timely Prosecution Services

Introduction

Timely Prosecution Services (TPS) is designed to meet the requirements cited. TPS holds information on judges and their cases, on defendants, on prosecution counsels, and on defense counsels; multiple defendants per case are also maintained in the system. To support all the features required of the system, a detailed database design was undertaken, which is presented as an Entity Relationship diagram below. Every entity has been identified, and its relationships with the other entities are described in the System Description part of this report.

The major issues taken into consideration are the queries that come up during use of the system, such as the cases under a particular judge or a prosecution counsel. Similarly, queries are provided to check the number of cases pending against a defendant and the number of defendants in a specific case. All these combinations have been taken care of.

System Description

The following entities were identified in this project.

  1. Judges
  2. Defendants
  3. Defense Counsels
  4. Crimes
  5. Court
  6. Prosecution Counsel
  7. Case

Each of these entities is related to the others, and the relationships between them are brought out in the following ER diagram. Only one judge is assigned to a case, whereas more than one crime may be present in a case; the same holds for the relationship between the case and the defendant, and between the defendant and the defense counsel. However, only one prosecution counsel and one court are allocated to every case. This is brought out in the ER diagram.

In order to ensure that the values pertaining to the zones or states are visible when querying, every case is recorded along with the state it is from; in this design, these are North, South, East, and West. This information is also saved in the Case schema. In every table, a primary key has been set so that individual rows can be retrieved directly. The primary keys are indicated in the ER diagram as mandatory keys.

In the case of child tables with foreign keys, two or more foreign keys join together to form the primary key for that table or schema. Since the defense counsel table can hold multiple counsels for each defendant in a case, the defense counsel table is a child of the defendants table; the primary key for its tuples is the combination of case id, defendant id, and defense counsel id. Every table has a unique id for locating data without losing the uniqueness of the information. A sketch of this pattern is given below.
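
The following is a minimal sketch of how such a child table might be declared. The table and column names (Defendants, DefenseCounsel, CaseID, DefendantID, CounselID) are assumed for illustration and are not taken from the actual implementation.

    -- Parent table: one row per defendant per case.
    -- (CaseID itself refers back to the Cases table defined elsewhere.)
    CREATE TABLE Defendants (
        CaseID      INTEGER NOT NULL,
        DefendantID INTEGER NOT NULL,
        Name        VARCHAR(100),
        PRIMARY KEY (CaseID, DefendantID)
    );

    -- Child table: several counsels may defend one defendant in a case,
    -- so the foreign-key columns together form the composite primary key.
    CREATE TABLE DefenseCounsel (
        CaseID      INTEGER NOT NULL,
        DefendantID INTEGER NOT NULL,
        CounselID   INTEGER NOT NULL,
        CounselName VARCHAR(100),
        PRIMARY KEY (CaseID, DefendantID, CounselID),
        FOREIGN KEY (CaseID, DefendantID)
            REFERENCES Defendants (CaseID, DefendantID)
    );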

ER Diagram for the TPS system

While making the ER diagram, the following considerations were applied:

  1. Judge to case is a one-to-one relationship.
  2. Case to crime is a one-to-many relationship.
  3. Case to defendant is also a one-to-many relationship.
  4. Defendant to defense counsel is a one-to-many relationship.
  5. Case to court is a one-to-one relationship.
  6. Case to prosecution counsel is also a one-to-one relationship.

All these criteria have been brought into the ER diagram shown below.

Figure 1: ER Diagram for the project.

The entity relationship diagram indicates the entities and the relationships between them, in addition to the attributes that are used. Based on the ER diagram, a relational database schema has been created, as pictured below.

Additional constraints

Every prosecutor can have more than one case to handle. Similarly, one defense counsel may defend more than one defendant in a case. Every defendant will have a defense counsel, but there is no requirement that a defense counsel handle only one defendant, whether in a single case or across multiple cases. These constraints are also considered in the design.

Relational Database Schema

Based on the entities and the relationships given above, the relational database schema can be deduced as follows:

Schema Names

Figure 2: Schema.

Case Id is the foreign key through which most of the schemas refer to the case information. In the defense counsel schema alone, two foreign keys together are used to refer to the rows, as described above. A sketch of this foreign-key pattern follows.
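
As a rough illustration, the sketch below declares the Cases parent table and one CaseID-linked child table, reusing the column names that appear in the queries later in this report; the data types are assumptions rather than the actual implementation.

    -- Parent: one row per case; every sub-schema links back via CaseID.
    CREATE TABLE Cases (
        CaseID       INTEGER PRIMARY KEY,
        Description  VARCHAR(200),
        Zone         VARCHAR(10),    -- North, South, East or West
        Status       CHAR(1),        -- 'T' = active, 'F' = inactive
        Noofadjnmnts INTEGER         -- number of adjournments so far
    );

    -- Child: a judge may sit on many cases, so JudgeNo alone is not
    -- unique; JudgeNo plus CaseID forms the key, and CaseID is the
    -- foreign key back to Cases.
    CREATE TABLE Judge (
        JudgeNo INTEGER NOT NULL,
        Name    VARCHAR(100),
        CaseID  INTEGER NOT NULL REFERENCES Cases (CaseID),
        PRIMARY KEY (JudgeNo, CaseID)
    );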

Case Id is present in all the sub-schemas to ensure that the link between the schemas exists throughout the database; for every case, the needed information is related to one another. However, the number of cases a judge is handling is not a direct query. If it is a routine query, a view may be set up so that there is a fixed link between the tables; this would also ensure that the required information is obtained swiftly, without any delay in getting to it. For this purpose, the following queries have been identified:

  • Region-wise pending cases and case information. For this purpose, a view connecting the regions is created, bringing together data pertaining to one single region. In the case details, a case status of True or False indicates whether the case is active or inactive; if it is False, the case is inactive and is not considered for this purpose.
  • Cases handled by a given prosecutor can be efficiently queried if there is a view that links these factors alone. A query and a view have been created to handle this case:
      -- Cases handled by a given prosecutor (:prosecution_id is a parameter).
      SELECT Prosecution.*, Cases.*, Court.CourtID
      FROM Prosecution
      INNER JOIN Cases ON Prosecution.CaseID = Cases.CaseID
      INNER JOIN Court ON Cases.CaseID = Court.CaseID
      WHERE Prosecution.ProsecutionID = :prosecution_id;
  • The most common types of crimes can be identified using a suitable query; a view is not created for this purpose.
  • To know the status of the cases before the judges, a view is created linking the judges to cases. To identify the most adjourned cases with a particular judge, the number-of-adjournments field of a case is also used. This helps in identifying the number of adjournments given, while the cases that were closed are identified using the active/inactive flag in the case table.
  • State with the largest number of crimes in a given year.
      -- State with the largest number of crimes in a given year
      -- (:year is a parameter).
      SELECT Court.State, COUNT(Crimes.CrimeID) AS CrimeCount
      FROM Crimes INNER JOIN Court ON Crimes.CaseID = Court.CaseID
      WHERE EXTRACT(YEAR FROM Court.FromDate) = :year
      GROUP BY Court.State
      ORDER BY CrimeCount DESC
      FETCH FIRST 1 ROW ONLY;
  • The most common type of crimes in a given state
      -- Most common type of crime in a given state (:state is a parameter;
      -- CrimeID is taken to identify the type of crime).
      SELECT Crimes.CrimeID, COUNT(*) AS Occurrences
      FROM Crimes INNER JOIN Court ON Crimes.CaseID = Court.CaseID
      WHERE Court.State = :state
      GROUP BY Crimes.CrimeID
      ORDER BY Occurrences DESC
      FETCH FIRST 1 ROW ONLY;
  • Largest number of adjournments
      -- Case(s) with the largest number of adjournments.
      SELECT Cases.CaseID, Cases.Description, Cases.Noofadjnmnts
      FROM Cases
      WHERE Cases.Noofadjnmnts = (SELECT MAX(Noofadjnmnts) FROM Cases);
  • Judges with more than 10 adjournments in a given year
      -- Judges whose cases total more than 10 adjournments. A year filter
      -- would additionally require a date column on Cases.
      SELECT Judge.JudgeNo, Judge.Name, SUM(Cases.Noofadjnmnts) AS TotalAdjournments
      FROM Cases INNER JOIN Judge ON Cases.CaseID = Judge.CaseID
      GROUP BY Judge.JudgeNo, Judge.Name
      HAVING SUM(Cases.Noofadjnmnts) > 10;

The schema for the judge – case join is as follows:

Judges – Cases View: CaseId (the key on which the join is created), CaseDescription, Zone, Status (T/F), Adjournments, JudgeId, Name.
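
A sketch of how this view could be declared, reusing the assumed table and column names from the earlier sketches (the view name JudgesCases is hypothetical):

    CREATE VIEW JudgesCases AS
    SELECT j.JudgeNo       AS JudgeId,
           j.Name,
           c.CaseID        AS CaseId,
           c.Description   AS CaseDescription,
           c.Zone,
           c.Status,                    -- 'T' active / 'F' inactive
           c.Noofadjnmnts  AS Adjournments
    FROM Judge j
    INNER JOIN Cases c ON j.CaseID = c.CaseID;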

The schema for the prosecutor query is as follows:

Prosecutor – Cases View: CaseId (the key used for creating the join), CaseDescription, Zone, Status (T/F), Adjournments, ProsecutionId, Name.

By using the above two views, most pending-case queries, including the cases pending with a specific judge, can be answered. Similarly, a query on the judges view can also provide the number of cases closed by a judge, which helps in identifying how busy the judge has been during the last few years. There may also be a need to know in which regions a defendant has cases pending against him; for such cases, too, an appropriate query can be created. The view created for this purpose is given below:

Defendant – Cases View: CaseId (the key on which the join is created), CaseDescription, Zone, Status (T/F), Adjournments, DefendantId, Name.

A filter is created on the Zone and on the status of the case when the requirement is restricted to a specific zone. The sort order is decided by the requirement: results may be ordered by defendant or, if the filter condition is for a specific defendant and a zone-wise listing is wanted, the query is filtered on that clause and ordered by zone, as the example below shows.
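
For example, a hypothetical query against the Defendant – Cases view (here called DefendantCases, with :defendant_id as a parameter) that lists one defendant's pending cases zone by zone:

    SELECT DefendantId, Name, CaseId, CaseDescription, Zone
    FROM DefendantCases
    WHERE Status = 'T'                 -- active (pending) cases only
      AND DefendantId = :defendant_id
    ORDER BY Zone;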

Additional queries and views can be created depending upon the need that comes up from the users.

Screen Shots

Figure 3. Opening menu screen.

An opening menu was created to ensure that the user is able to work comfortably with the system. Data creation happens at two levels. The first is master data creation: this covers the master data on the judges, the prosecution counselors, and other fixed information such as the court details, and is handled in one of the main options, termed master data creation. The second option creates the case details and the crime details; these two are variable and may carry information pertaining to any of the master records already entered.

A query menu and another for the reports are also created. While the reports are produced on the printer, the queries appear on the screen. The information is obtained by querying the database and is then presented to the user; the queries are formed using the views and the schemas already discussed.

Figure 4: Case Details entry screen.

The master entry screens are simpler and comprise only the fields that have to be entered to complete a tuple in the schema for each of the master tables. In the case of judges, the judge number, the name, and the case that they deal with are taken into consideration. There can be multiple entries for the same judge, but the combination of judge number and case number cannot be duplicated; together they form the unique key for the table. Similarly, for the prosecution counselor, the counselor id together with the case number becomes a unique key, whereas the counselor id alone is not unique and will have multiple records. This is enforced by the entry screen during data entry. The case details entry screen is more complex.

For a single case, there can be multiple defendants, and for every defendant there can be multiple defending counsels. This is supported by the table structure and by the frame in the screen that allows multiple entries for the defendant. In the same way, the crime can be entered multiple times. This meets all the conditions that are needed for the purpose. The rest of the information in the screen occurs only once per case.

Figure 5: Typical Query result screen.

A typical query results screen is shown above. It is displayed when the criteria have been collected from the query screen, and the results of the query are presented in this format.

Improvements and Suggestions

A number of improvements and suggestions could be built into the software. Additional fields could maintain judgment details, and queries could also be based on the judgments, so that if at any point in the future the archives have to be searched for a specific judgment, the same fields can be used.

Conclusion

A database has been designed and implemented, with Oracle employed as the back end. The ER diagram has been presented, and the details have been analyzed and discussed. In addition, a few improvements and suggestions have been presented. Distributed data management has been provided for through the region-wise accounting of the cases, allowing information pertaining to the various regions to be queried and the results compared.


Do Video Games Cause Violence?

The debate over whether video games cause violence is raging, particularly in the United States, where a remarkable 60 percent of Americans play these games habitually, 32 percent of whom are above the age of 35 (Physorg.com).

Those in favor of violent video games contend that fantasy and violence are integral parts of our life, which we tend to manifest in the form of storytelling. Video games therefore express our dreams, which are purely figments of our imagination; such games are nothing but communicated thought. For example, a person who dreams of flying will mentally perceive the fantasy quality of video games like ‘Lost Planet’; a person who wakes from a dream involving scary things will mentally perceive the emotionally purging quality of video games like ‘Grand Theft Auto’. In both cases, the imaginary scenes provide us mental impressions and involvement without compelling us to undertake daring or risky feats such as leaping from a high point. Video games are not tools that can transport our imagination into the real world; they are merely weak imitations of the genuine, providing help and relief to those who are apprehensive and unable to comprehend this selfsame world. Distorted individuals do not play video games to gain or increase knowledge of violence; the violence is already inherently present in them. Accusing video games of inciting violence is like declaring that one who reads the Bible will only do good deeds and is incapable of bad actions.

Another argument in favor of video games is that a long-term study conducted in June 2005 by Dmitri Williams, an acknowledged expert on the influence of video game playing, found that the players’ “robust exposure to a highly violent online game did not cause any substantial real-world aggression”; the players neither increased their argumentative behaviors after game play, nor were significantly more likely to argue with their friends and partners (Physorg.com).

Those who condemn violent video games contend that playing such games stimulates the brain for aggression. A chilling example is the 2003 fatal shooting of three persons by 16-year-old Devin Moore in Fayette, Alabama, an action reportedly caused by Devin’s unnatural obsession with the GTA video game, which he played constantly (Smith). In their first argument, those against violent video games cite a 2001 study of college students in the United States: a group of students who first played an aggressive video game and then confronted each other in a feigned challenge showed more aggressive behavior towards each other than another group of students who had played a non-aggressive video game (Medical College of Wisconsin).

The second argument against violent video games is based on solid scientific information. In a study conducted in May 2005 on 13 subjects aged between 18 and 26 in Germany, a functional magnetic resonance imaging (fMRI) system provided images showing that the subjects’ brains displayed “large observed effects,” a feature of aggressive thoughts; the researchers, led by Rene Weber, stated that this finding is neurobiological proof that violence in video games does spawn aggressive repercussions such as aggressive knowledge acquisition, aggressive results, or aggressive actions (Kanellos).

In conclusion, I find the arguments against violent video games more potent than the ones in favor of such games. I feel my stand gains strength from Dmitri Williams’ admission that his study findings were not totally conclusive and that there was a need for more long-term studies, until which time it would be inappropriate to make strong predictions (Physorg.com). The hype over violent video games has reached such a level that even sex workers have warned against the threat violent video games pose to children, specifically targeting GTA ‘San Andreas’, in which rape is not only strongly implied, but the prostitutes in the game are also in danger of being killed by the protagonist (Haines).

References

Haines, Lester. The Register, 2006. Web.

Kanellos, Michael. “Violence in Games Stimulates Brain for Aggression.” CNET News.com. 2005.

Physorg.com, 2005. Web.

Smith, Tony. “Grand Theft Auto Firm Faces ‘Murder Training’ Lawsuit.” The Register. 2005.

“Video Games: Violence & Broken Bones.” Medical College of Wisconsin. 2001.

Nanotechnology Risk in a Nanogenerator

A nanogenerator is a device that utilizes the semiconducting and piezoelectric properties of zinc oxide nanowires. These piezoelectric nanowires are used in the conversion of mechanical energy into electrical energy (Yang et al. pg 1). The nanogenerator produces a continuous electric flow from ultrasonic waves; the electric flow that a single nanowire produces can even go up to four watts per centimeter (MLO pg 1).

A nanogenerator benefits from the semiconducting and piezoelectric characteristics of zinc oxide nanostructures, which produce micro electric charges when the nanostructures are flexed. The piezoelectric property of zinc oxide transforms the mechanical strain into polarization charges that create a piezoelectric potential. The electrode-nanowire interface has a Schottky barrier that directs the electron flow under the influence of the piezoelectric potential (Yang et al. pg 1).

The nanogenerator was designed to tap energy from environmental sources, including mechanical vibrations, blood flow, and ultrasonic waves. Several approaches to demonstrating nanogenerators have been presented, the most popular being the single wire generator (SWG). The SWG is made up of a single zinc oxide nanowire with its ends attached to metal contacts; the nanowire lies on a flexible substrate (MLO pg 1).

In the context of this essay, zinc oxide is used in the nanogenerator, where it acts as a transducer in the conversion of mechanical energy to electrical energy. Global use of zinc oxide exceeds 1.2 million tons every year. Other uses of zinc oxide include rubber manufacturing, where it increases the elasticity and strength of rubber, as well as concrete manufacturing, anti-corrosive coatings, and cigarette filters.

Pharmaceutical uses of zinc oxide include the treatment of irritation and minor burns. Zinc oxide is also of great importance in the manufacture of sunscreen lotion because of its ability to absorb ultraviolet radiation, eliminating the damage caused by UV radiation and protecting the skin from sunburn. Research conducted to observe exposure-related effects in humans concluded that zinc oxide is not a skin irritant (Occupational Safety & Health Administration para 3).

Human exposure to zinc oxide can occur in various ways, most commonly through inhalation, ingestion, or eye or skin contact. Certain operations involve zinc oxide and may therefore expose those who conduct them.

These operations include its use in cosmetics, food additives, photoconductors, seed treatments, and color photography (United States Department of Labor pg 1). Zinc oxide is believed to affect the reproductive system and the lungs in experimental animals (United States Department of Labor pg 1).

A study to establish the skin penetration of zinc oxide and other sunscreen nanoparticles in humans indicated that there was limited penetration, approximately 0.03%, of zinc oxide through the epidermis. Observation with an electron microscope showed no particles beneath the stratum corneum, indicating that there is limited penetration of nanoparticles through the human skin (Global Nanomaterials Safety pg 13).

Research on the distribution of zinc oxide on the human skin was conducted using several techniques: multi-photon microscopy (MPM) in combination with scanning electron microscopy (SEM) and energy dispersive x-ray (EDX) analysis.

The research indicated that zinc oxide nanoparticles accumulated in skin folds in the stratum corneum and, in some cases, at hair follicle roots. Considering the poor penetration of zinc oxide through the stratum corneum, this suggests that zinc oxide penetration through the skin is not likely to cause any health concerns (Global Nanomaterials Safety pg 13).

A study of the possible acute toxicity of zinc oxide was conducted using healthy adult mice. The study showed little variation between the toxic effects of 20 nm and 120 nm zinc oxide particles. There are indications that a relationship exists between the size of the zinc oxide particles and the effect; the research, however, concluded that the particles of zinc oxide are not toxic. Acute zinc oxide exposure is believed to cause respiratory irritation, nausea, fever, vomiting, chills, and coughing (United States Department of Labor pg 1).

Analysis of the possible effects of repeated-dose toxicity of inhaled zinc oxide was conducted in animals. When rats inhaled zinc oxide for five days, local lung inflammation was observed, indicated by alterations in certain parameters in histological examinations and in the bronchoalveolar lavage fluid (BALF).

Apart from this inflammation, draining of the lymph nodes was also observed. The effects were reversible within a certain recovery period and were related to the concentration inhaled. Chronic zinc oxide exposure through the skin is believed to cause “papular-pustular” skin eruptions in the pubic region, on the scrotum, and on the inner thigh and inner arm (United States Department of Labor pg 1).

The effects of zinc oxide exposure on cell morphology were drastic within the first 24 hours of exposure, and more pronounced when the concentrations were higher than 50 micrograms per milliliter; the cells shrank and took on an irregular shape. At higher concentrations of about 100 micrograms per milliliter, the cells became detached and necrotic. Concentrations of less than 10 micrograms per milliliter, however, caused no observable change in the cells (Wan-Seob pg 4).

When exposed to approximately 50-100 micrograms per milliliter of zinc oxide, an estimated 15-50% of the cells died, as indicated using the trypan blue dye technique. Concentrations of less than 25 micrograms per milliliter of zinc oxide did not cause any significant effect on cell viability. The permissible exposure limit set by the Occupational Safety and Health Administration for zinc oxide is 15 milligrams per cubic meter (United States Department of Labor pg 1).

A 24-hour exposure to 100 micrograms per milliliter lowered mitochondrial function by over 80%. These results also indicated that the toxicity of zinc oxide was much higher relative to nanoparticles of other metal oxides. Measures aimed at controlling zinc oxide exposure include exhaust ventilation, wearing protective equipment, process enclosure, and general dilution ventilation, among others (United States Department of Labor pg 1).

Exposure to inhaled zinc oxide is determined by using a polyvinyl chloride filter, followed by respirable-fraction sampling using a 10 mm nylon cyclone. For the respirable fraction, the sample is collected at a flow rate of up to 1.7 liters per minute until up to 816 liters have been collected.

For sample collection on total dust, collection is conducted at a flow rate of up to 2.0 liters per minute and continued until up to 960 liters have been collected. Gravimetric techniques are used in the analysis procedure (United States Department of Labor pg 1).
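
Assuming continuous collection at the stated maximum flow rates, both maximum volumes correspond to the same full-shift sampling period:

    816 L ÷ 1.7 L/min = 480 min = 8 h  (respirable fraction)
    960 L ÷ 2.0 L/min = 480 min = 8 h  (total dust)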

The absence of epidemiological data is compensated for by research results obtained from animal studies. Risk management strategies to reduce exposure time and concentration include emergency planning requirements, and hazardous wastes should meet reportable-quantity requirements. Employers should submit annually the quantities of zinc oxide released in their facilities, as a means of informing the community.

Workers should take responsibility for observing the respiratory protection policy, which includes the conditions for the use of a respirator and the guidelines of the protection program. The clothing and equipment used for personal protection by the workers should be selected carefully, and protective clothing should be evaluated frequently to establish its effectiveness in preventing dermal contact.

Ultrafine particles are very small, about 100 nanometers or less, and they result from activities such as cleaning, cooking, operation of consumer appliances, and smoking tobacco products, among others. These ultrafine particles pose health risks.

The small particles pose more risk because, with their large surface area relative to volume, a greater proportion of their atoms is exposed at the surface. Ultrafine particles can be characterized as either natural or anthropogenic. Natural sources of UFPs include forest fires, viruses, and biogenic magnetite, among others.

Anthropogenic sources, which are human-generated, can be categorized as intentional and unintentional. Engineered nanoparticles fall under the intentional anthropogenic sources, while unintentional anthropogenic sources comprise jet engines, frying, grilling, metal fumes, and incinerators, among others. Excessive exposure to UFPs can cause oxidative inflammation of the lungs and can thereby increase susceptibility to infections and conditions such as pneumonia, asthma, and chronic bronchitis (Air Quality Sciences pg 1).

Works Cited

Air Quality Sciences. Ultrafine particles: why all the concern about something so small? n.d. Web.

Global Nanomaterials Safety. Toxicological review of nano zinc oxide. PROSPECT: Global Nanomaterials Safety. 2009. Web.

MLO (Medical Laboratory Observer). “New technology.” MLO: Medical Laboratory Observer 39.7 (2007): 68-68.

Occupational Safety and Health Administration. Occupational safety and health guideline for zinc oxide. United States Department of Labor, n.d. Web.

Wan-Seob, Cho et al. “Metal oxide nanoparticles induce unique inflammatory footprints in the lung: important implications for nanoparticle testing.” Environmental Health Perspectives 118.12 (2010): 1699-1706.

Yang, Rusen, Qin Yong, Li Cheng, Dai Liming and Wang Zhong Lin. “Characteristics of output voltage and current of integrated nanogenerators.” Applied Physics Letters 94.2

Human vs. Machines in Factories Over the Past 15 Years

Human beings have been in constant competition with machines to secure their place in the process of production since the rise of the modern era (McCarthy and McGaughey 1989, p. 3). It is certain that all types of production involve some form of human labor. Labor is the means by which the human mind applies its designs and aims to matter; it is man’s application of his bodily and mental faculties for the purpose of altering matter and thereby making it serve a further end (Reisman, 1990, p. 131).

The effects of technological advances on income and employment have been debated since the despairing workers of the textile industry in Nottingham, England, destroyed the newly invented knitting machines that did not require human presence and threatened their income and way of life. The high rate of economic growth of the western capitalist economies produced a fast rise in labor and capital incomes and also a progressive shortening of the workday. At the beginning of the 20th century, the number of workers engaged in various kinds of active production, as well as their wages, was rising rapidly, although annual labor hours remained fairly stable. At the end of the forties, the number of workers involved in the production industry still continued to grow, although their real wages increased quite slowly.

During the past 15 years, unemployment rates have tended to grow in the United States, as well as in many less-advanced countries. The salary gap between the upper and lower class groups appears to be widening. This effect seems to have consumed the whole nation, and it has been suggested that it is the result of the acceleration of technological change.

The replacement of humans by machines, in other words the automation of the manufacturing process, tends to speed up the aspects of production that were slowed down by human presence. The fewer human beings required to perform the production, the more rapid the industry becomes. The present rate of growth of the United States economy depends on energy sources rather than on human labor. Human labor cannot be completely abandoned, but it can be minimized; where that is not possible, human labor is forced to keep pace with a production rate that outruns all known predecessors.

The age of automation began to arrive in the United States during World War I. Before the war, for example, the US textile industry was completely dependent on German imports of chemicals and dyes. When the war with Germany cut off those imports, there were not enough skilled workers in the US to establish production of those chemicals, so the United States was forced to build its own chemical industry at once; human labor had to be suddenly replaced with machinery. The German chemical workers had received their training by observing their parents, studying the art of dye manufacturing through observation of the liquid color. The newly opened plants in America implemented a continuous-flow process, managed by sensitive automatic measuring instruments designed specifically for that purpose. Many other industries later gained sophisticated machinery that eliminated their need to employ highly skilled workers. This is considered to be the arrival of the automation age (Leontiff 1995). It was only half a century later that electronic computing devices were invented and adopted, permitting machines to carry out not just accurate observations but also what is now called automatic thinking and reasoning. Currently, in the United States, technological change has become closely connected with the fast advancement of scientific knowledge. Looking into the future, we can expect further replacement of even the most skilled manufacturing workers by a vast array of means of production.

One may wonder what the social and economic consequences of these changes will be within the next decade. It is certain that with the implementation of automated production, the output of goods will rise. However, this rise will not be as rapid as in the last fifteen years, because of today’s awareness of the need to protect the environment and preserve natural resources. For now, the distribution of national income between human labor and automated production is settled; however, the demand for human labor has already ceased to increase, and in time wages will start to fall, even for workers who are highly trained and well paid. Consequently, the income of the companies that own the natural resources and means of production will rise. What is now called “technological unemployment” will weaken the trade unions in their ability to raise real wages above a certain competitive level.

The acceleration of technological change in manufacturing, and particularly production automation, will not reduce the government’s role in equalizing the distribution of national income. In fact, in the long run, this function of the state will grow.

Throughout many centuries, the need for labor was perceived as an unavoidable burden, except by the minority privileged to command servants. Only this minority possessed the luxury of real leisure, meaning that they were under no necessity of being drained of their energy through exhaustion. The tendency of the last fifteen years points to a future in which the requirement for human labor gradually vanishes, to the point where the majority cannot be occupied and must abandon their talents and available energy. Leisure will become prevalent among the majority of the population, up to the point of total boredom and idleness. Furthermore, those individuals able to accomplish meaningful work will make up a privileged minority, while the idle majority will be burdened with free time to kill.

The Industrial Revolution, for the first time in history, caused the replacement of heavy animal and human labor with machines, providing greatly increased mobility to humanity. The pace of the Industrial Revolution was constantly accelerating, and with the arrival of electronic technology this pace became explosive. Smart machines equipped with highly sophisticated intelligence have, especially during the last fifteen years, performed more and more of the production and service tasks required by human society. In a technologically advanced nation like the United States, the new developments in manufacturing have become very familiar to the population and require no further elaboration. However, a large part of humanity is still experiencing the early stages of technological and industrial integration. Up to the present, work has managed to sustain more or less acceptable employment levels. In the near future, however, society will experience an inevitable rise in unemployment. This is especially true for advanced societies like the United States (Muller 1997). Because of the automation of manufacturing and the lower number of jobs in the factories, much can be and is being done to keep people employed. Some manufacturers permit job sharing between two or more people in order to shorten working hours. Others increase vacation and holidays or implement other ways of spreading employment opportunities more widely. But despite all the measures taken by companies that use machines in the workplace, unemployment rates have been increasing for the last fifteen years and will increase dramatically within the next decades, as the need for work diminishes.

There is a standard economic objection to the above scenario. One might argue that the problem of absolute scarcity was solved in much of the West thirty years ago. But people are progressively transforming their “wants” into “needs,” and this creates “artificial” consumer-society demands, which appear to be almost infinite. It is evident that this process will last for a long period of time, and as long as people are ready to spend money on another of their “needs,” whether an item or a service, the jobs required to make that thing or fulfill that service will exist. This will drive a higher level of manufacturing automation; however, there will also be more demands, and consequently more jobs that these demands create.

Nevertheless, this argument has a weak side: it underestimates the revolutionary impact of information technology on manufacturing. This impact points to a future where machines are made by other machines and where machines substitute for human labor in an exponential progression. In fact, over the last fifteen years such a tendency has already been clearly observed. During this timeframe, in the most advanced countries, including the United States, smart machines have overtaken the labor market. On one side, we are dealing with an increasing number of machine builders, caregivers, and symbol manipulators; on the other, an enormously high number of fast food and hospital laundry workers. This is a dramatic national social issue in the US at the current time, and in the future it will become more acute with the dynamic advancement of manufacturing technology. Jobs at the lower end of the sophistication scale will disappear much more rapidly than those at its upper end.

A good example of the aforementioned tendency is the United Auto Workers, an essential player in the United States automotive manufacturing industry. Despite a management-union relationship that commonly experiences high levels of distrust, resistance, and suspicion on both sides, there is a history of reasonable cooperation and recognition of shared interests. However, it is clear that management’s attempts to make the production system “idiot-proof,” accompanied by the union’s attempts to protect its members, led to elaborate and bulky work rules and job classifications. Most manufacturing employers see these as a huge barrier to effective workforce deployment and, therefore, a block on productivity. In the conditions of a protected market, the rules governing production, for manufacturers as well as for suppliers, are suited to large-volume production. Manufacturing equipment should never stand idle, and inventory must be constantly maintained in order to ensure the product’s continuous flow. Questions of quality, effectiveness, and cost all give way to the obligation to produce. Human labor is not allowed to interfere with production, so jobs are designed to be as basic as they can be, which also makes it easier to substitute one worker for another. When a certain job requires a specific human skill, it is the clear responsibility of a designated worker. The automotive manufacturing industry has long followed the principle of “if it ain’t broke, do not fix it,” and as long as vehicles rolled out of the factory doors, it was difficult to claim that anything was “broke” (Flynn and Cole, 1988, p. 93).

In order to prevent the rise of unemployment rates in the United States, employers should fundamentally change their policy towards human labor during the wide-scale integration of machines into the manufacturing industry. The accomplishments and strategies of the Japanese automotive industry might be a good example for Americans to follow. The governing themes of Japanese manufacturing are improvement of quality and reduction of waste; the use of automated manufacturing equipment frequently takes second place. Japanese manufacturers regard human labor as a highly valuable and important resource that requires constant development and nurturing. Manufacturing jobs are designed to enable flexible assignments requiring the development of multiple skills, unlike in the US manufacturing industry, where the substitution of machines for human labor induces skill reduction. Furthermore, this strategy drives Japanese companies towards excellence and spreads their manufacturing. Such a pursuit of excellence should be acknowledged and adopted by US manufacturers, in the automotive industry and in every other, as this strategy promises enormous potential dividends.

References

  1. George Reisman, Capitalism: A Treatise on Economics [book on-line] (Ottawa, IL: Jameson Books, 1990), p. 131.
  2. Eugene McCarthy and William McGaughey, Nonfinancial Economics: The Case for Shorter Hours of Work [book on-line] (New York: Praeger Publishers, 1989).
  3. James W. Cortada, The Digital Hand: How Computers Changed the Work of American Manufacturing, Transportation, and Retail Industries [book on-line] (New York: Oxford University Press, 2004).
  4. Michael S. Flynn and David E. Cole, “The U.S. Automotive Industry: Technology and Competitiveness,” in Is New Technology Enough? Making and Remaking U.S. Basic Industries, ed. Donald A. Hicks [book on-line] (Washington, DC: American Enterprise Institute, 1988), p. 93.
  5. Steven Muller, “Time to Kill,” The National Interest, 1997.
  6. Wassily Leontief, “The Long-Term Effects of Technological Change,” Challenge 38, no. 4 (1995).

Managing Pilot Fatigue

Introduction

Many aviation accidents have been attributed largely to pilot fatigue (Mohler, 1998, p. 1). This is because a pilot’s input to the aircraft depends heavily on his or her alertness on the job; a fatigued pilot leaves room for input errors (Smith, 2008, p. 1).

Even though the flight systems of current aircraft incorporate advanced preventive mechanisms, without sleep and freshness even small effects of fatigue can greatly jeopardize flight safety, because the duties a pilot performs in the cockpit demand great vigilance and care as well as mental and physical well-being.

Effects of Fatigue on Pilot Performance

There is a strong correlation between pilot fatigue and vulnerability to pilot error. One effect that is rampant among fatigued pilots is cognitive fixation: a narrowing of attention that decreases concentration and dulls the pilot’s ability to multitask, a necessary skill in the aviation field.

This inability to perform several necessary tasks at once leads to the neglect of other important aircraft functions, which may cause malfunctions and ultimately a crash. Fatigue can therefore severely degrade a pilot’s situational awareness (Jackson and Earl, 2006, p. 1).

Fatigue also reduces the vigilance and alertness a pilot requires, further degrading situational awareness. It reduces communication between the crew and the support team, and this lack of coordination can seriously harm the airline’s crew resource management, possibly leading to job losses and a reduction in pilot cadre levels (Printup, 2000, p. 1). Fatigue also leads to inconsistent performance, which may put pilots’ careers in jeopardy as well.

Fatigue also impairs pilots’ ability to recall information that may be crucial in certain circumstances. This memory deficiency may lead a pilot to forget important Air Traffic Control procedures and information, putting many lives at risk (Mohler, 1998, p. 1).

Fatigue likewise causes cognitive slowing, leaving the pilot unable to collect, analyze, and integrate information efficiently. This in turn leads to impaired logical reasoning, impaired judgment, and an inability to make sound decisions (Jackson and Earl, 2006, p. 1).

Fatigue also degrades flying performance because the pilot’s perceptual abilities are impaired. It reduces visual perception, saps initiative and effort, creates vulnerability to plan-continuation error when the ability to recognize a deteriorating situation in the aircraft is impaired, and may often lead to depression (Printup, 2000, p. 1).

Managing Pilot Fatigue

In combating pilot fatigue, it is important to understand that pilots have inflexible schedules and need more comprehensive fatigue management strategies that are in harmony with those schedules. It is also important to note that much pilot fatigue is largely attributable to the flight and duty time limits imposed by airline regulators (Mohler, 1998, p. 1).

According to Smith (2008), when pilots are off duty they should obtain at least eight consolidated, uninterrupted hours of sleep daily, maintain their health with a good balanced diet, exercise regularly, practice stress-reduction techniques such as yoga, and refrain from work that requires heavy physical or mental effort.

When pilots are on duty, they should alternate periods of activity and relaxation during flight. They should also consume moderate amounts of caffeine if desired, and take food and water as regularly as they can, supplying the body with the energy that wards off the fatigue caused by low blood sugar and dehydration (Mohler, 1998, p. 1).

They should also take preplanned naps in multi-pilot cockpit environments to refresh themselves, provided this conforms to airline rules and policies.

Conclusion

If a pilot cannot avoid duty when fatigued, Jackson and Earl (2006) suggest eating high-protein foods with plenty of water, which temporarily holds fatigue at bay; drinking caffeinated beverages in moderation, which helps enhance alertness; and, above all, conversing with other crew members, making rounds, and stretching, which is therapeutic enough to take the edge off fatigue.

But when a pilot does not feel confident in his or her ability to fly because of fatigue, then regardless of the schedule, he or she should not fly. This avoids putting many lives in jeopardy, along with the company’s reputation, in the event of an accident (FAA, 2009, p. 1).

Reference List

Federal Aviation Administration (FAA). (2009). Pilot Safety. Web.

Jackson, A. and Earl, L. (2006). Oxford Journals. Web.

Mohler, S. (1998). Pilot Fatigue Manageable, but Remains Insidious Threat. Human Factors & Aviation Medicine, 45(1). Web.

Printup, M. (2000). The Effects of Fatigue on Performance and Safety. Airline Safety. Web.

Smith, B. L. (2008). Pilot Fatigue Detection Using Aircraft State Variables. West Virginia University: College of Engineering and Mineral Resources. Web.

Information Systems Management. Bead Bar Network

Introduction

Bead Bar is a company founded by Meredith in 1998, with its first branch in New Canaan, Connecticut. The basic idea behind it was to let customers create their own bead jewelry through innovativeness and creativity; the designers use locally available materials such as wires and stones to make the jewelry. The Bead Bar has grown to comprise three divisions: studios, franchises, and Bead Bar on Board.

The studio division’s main objective is to manage the six Bead Bar studios, two of which are located in New York City, with others in Long Island; Washington, D.C.; and Boston, Massachusetts. The franchise division’s main objective is to supply and sell a bead package to any business that intends to start its own bead bar studio. There are five franchises: Chicago, Illinois; Miami, Florida; Los Angeles, California; Seattle; and Kansas City, Missouri. Bead Bar on Board is a portable Bead Bar designed especially for cruise ships. The company has 15 full-time and 20 part-time employees (Malaga, 2005).

Background

As the company has grown across these diverse divisions, its paper-based system has shown many shortcomings, such as lost orders, incorrect invoicing, and supply delays (Malaga, 2005).

The company would benefit greatly from installing an information system, which would keep up-to-date and accurate information on orders, products, inventory, and account activity. It would give easy access to information on stock and supply and ensure customer confidentiality, making decisions easier to reach and implement (Malaga, 2005).

Recommendations

Any kind of network has advantages and disadvantages, but the most important issues to consider are reliability and capability. The cost of developing and running the chosen network topology and architecture should also be weighed. The divisions are widely dispersed, so a network is required that lets all the divisions communicate with one another.

The equipment and software that are available, or that can be acquired, should also be taken into account, together with their licenses and modes of use. The speed of the network and the amount of information it can carry matter as well, as do the security and confidentiality of both company and customer information (Jeff, 2005).

Network Topology

A network topology is the way devices are laid out and connected in a network. In the Bead Bar’s case, mesh topology is the most suitable choice: it is cost-effective, suits all the directors’ conditions, and allows information to travel in both directions, transmitting data back and forth (Marangu, 1999).

The Bead Bar divisions are located in different parts of the United States, so data must move quickly and information must flow efficiently. Because there is no central hub, this topology is highly reliable, and it provides the fast data transfer required.
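To make the scale of such a mesh concrete, the short sketch below counts the point-to-point links a full mesh would need for the Bead Bar’s sites. The site list simply echoes the locations mentioned in the case above, and the n(n-1)/2 link count is the standard formula for a full mesh; this is a minimal illustration, not part of the case itself.

```python
# Minimal sketch: links needed for a full mesh of the Bead Bar sites.
# One entry per location mentioned in the case; the n(n-1)/2 formula
# is the standard link count for a full mesh.

sites = [
    "New Canaan (HQ)", "New York City", "Long Island", "Washington D.C.",
    "Boston", "Chicago", "Miami", "Los Angeles", "Seattle", "Kansas City",
]

n = len(sites)
links = n * (n - 1) // 2  # every site connected directly to every other
print(f"{n} sites -> {links} point-to-point links in a full mesh")
```

The quadratic growth in links is the usual price of a full mesh: high redundancy and no single point of failure, but many connections to build and maintain.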

Network Architecture

Client/server architecture is the best fit for the Bead Bar’s network, since it supports WANs running over TCP/IP. It assigns some computers to process requests and provide specific services while others act as clients. A client/server system is powerful: multiple requests can be made simultaneously, and information stored from clients can also be shared (Marangu, 1999). Because it works over TCP/IP protocols, client information remains confidential and the integrity of company data is preserved.
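As a rough illustration of this request/response pattern, the sketch below pairs a one-shot TCP server with a client using Python’s standard socket module. The host, port, and message are hypothetical stand-ins, and a production Bead Bar system would of course add authentication and encryption on top.

```python
# Minimal sketch of the client/server request/response pattern over TCP/IP,
# using Python's standard socket module. Host, port, and message are
# hypothetical values chosen for illustration.
import socket
import threading

HOST, PORT = "127.0.0.1", 5050

server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_sock.bind((HOST, PORT))
server_sock.listen()

def serve_once():
    """Accept a single client and answer its request."""
    conn, _ = server_sock.accept()
    with conn:
        request = conn.recv(1024)
        conn.sendall(b"processed: " + request)
    server_sock.close()

worker = threading.Thread(target=serve_once)
worker.start()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
    client.connect((HOST, PORT))
    client.sendall(b"inventory query from a studio")
    print(client.recv(1024).decode())  # -> processed: inventory query from a studio

worker.join()
```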

Network Advantages and Drawbacks

A strong and reliable networking system is essential, and mesh topology meets this need. Its speed and efficiency of communication matter to this company because of the distances between locations. The computers are all interconnected, fulfilling the directors’ condition that each computer be connected to the others, and the absence of a hub increases the speed and efficiency of transmission. The main drawback of a mesh is the larger number of connections it requires. The client/server architecture, for its part, restricts access to important company data: if a branch requires some information, it must request it through the proper channels (Jeff, 2005).

Conclusion

An information system would improve the Bead Bar’s bottom line through increased revenue from fewer invoice errors, promptly filled orders, fewer employees needed to process paperwork as the company expands, supply that meets demand, faster product and revenue turnaround, and added potential for expansion through greater customer satisfaction and increased internet sales.

Based on all the networking recommendations made here, the Bead Bar will be able to run its business more efficiently. With proper networking in place, the database can run properly. The executives at the Bead Bar will be pleased once they see the results, because the business will grow as a direct consequence of the improved communication the chosen topology and architecture provide (Malaga, 2005).

References

Jeff, K. (2005). Computer Networking. London: Oxford Press.

Malaga, R. A. (2005). Information systems technology. New Jersey: Pearson Education, Inc.

Marangu, J. (1999). World of Networking. New York: New York Press.

Information Systems Management. Network Topologies

The term topology refers to the layout of the devices connected in a network. It may also refer to the shape or structure of the network, that is, the physical arrangement of the devices, which may not correspond to the name or functionality of a particular topology. Network topologies fall into the following basic types: bus, ring, and star (Lammle, 2003).

In a bus topology, devices are connected linearly along a single cable, which acts as the backbone and the shared communication medium for all devices attached to it via interface connectors. A device wanting to communicate with other devices on the network sends a message onto the backbone cable; all the devices see it, but only the intended recipient accepts and processes it (Lammle, 2003).

A bus topology runs into trouble when two devices want to transmit at the same time on the same bus. As a result, bus network systems need some scheme of collision handling or collision avoidance, such as carrier sense multiple access (CSMA) or a bus master that controls access to the shared bus resource (Stallings, 2000).
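The collision problem and a simple carrier-sense remedy can be pictured with a toy simulation. The sketch below uses discrete time slots and random backoff; it is only a cartoon of the idea, not the exact CSMA variant of any real standard.

```python
# Toy slot-based simulation of carrier sensing with random backoff on a
# shared bus. Illustrative only; real CSMA schemes (e.g., Ethernet's
# CSMA/CD) are considerably more elaborate.
import random

random.seed(1)
frames_left = {"A": 2, "B": 2}   # frames each station still has to send
backoff = {"A": 0, "B": 0}       # slots each station must stay silent

slot = 0
while any(frames_left.values()):
    slot += 1
    ready = [s for s in frames_left
             if frames_left[s] > 0 and backoff[s] == 0]
    for s in backoff:             # count down any pending backoff
        backoff[s] = max(0, backoff[s] - 1)
    if len(ready) == 1:           # the bus is free for a single sender
        frames_left[ready[0]] -= 1
        print(f"slot {slot}: {ready[0]} transmits")
    elif len(ready) > 1:          # simultaneous senders collide
        print(f"slot {slot}: collision between {ready}")
        for s in ready:
            backoff[s] = random.randint(1, 4)  # random backoff resolves it
```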

This topology has several advantages: it is low in cost, since less cabling is required, and it is easy to implement and install. It also has disadvantages: it limits cable length and the number of workstations, a cable fault affects all stations, and the network slows down as the number of workstations grows. Moreover, if the backbone cable fails, the entire network breaks down (Craft, 2003).

In a ring topology, each device is connected to its two adjacent neighbors for communication purposes. Any message sent travels around the ring in one direction, either clockwise or anticlockwise. The signal passes through the network card of each device and is handed on to the next one, with all devices having a cable home run back to the multistation access unit (MAU). A message is relayed from one device to another, and only the intended device accepts and processes it (Lammle, 2003).
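The relay behavior is easy to picture in a short sketch: a frame is handed from neighbor to neighbor around the ring until the addressed station accepts it. This is a simplified illustration, not a model of any particular token-ring protocol.

```python
# Minimal sketch of ring relay: a frame passes from neighbor to neighbor
# in one direction until the addressed device accepts it.
ring = ["A", "B", "C", "D", "E"]      # devices in ring order

def send(src: str, dst: str) -> None:
    i = ring.index(src)
    while True:
        i = (i + 1) % len(ring)       # hand the frame to the next neighbor
        if ring[i] == dst:
            print(f"{dst} accepts the frame from {src}")
            return
        print(f"{ring[i]} relays the frame onward")

send("B", "E")   # frame travels B -> C -> D -> E
```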

The main advantages of ring topology are that a cable failure affects only a limited number of users, all users have equal access, and each workstation has full access speed to the ring. Its demerits are that a failure in any cable or device breaks the loop and can bring down the entire network, and that it involves costly wiring, difficult connections, and expensive adapter cards (Stallings, 2000).

Star topology has a central connection point called the “hub,” which may be an actual hub, a switch, or a router. All the devices in the network connect to it with unshielded twisted pair (UTP) Ethernet cable. The main advantage of this topology is that a failure in any one cable affects only the computer using that cable, not the entire LAN (Stallings, 2000).

In addition, it is easy to add new workstations, and the network can be monitored centrally at the hub. Like all other topologies, star topology has its disadvantages: any failure in the hub brings down the entire network, and hubs are slightly more expensive than thin Ethernet.
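The asymmetry between a cable fault and a hub fault can be shown in a few illustrative lines; the station names below are, of course, hypothetical.

```python
# Illustrative sketch: in a star, a leaf-cable fault isolates one station,
# while a hub fault disconnects every station.
stations = {"PC1", "PC2", "PC3", "PC4"}   # each wired directly to the hub

def reachable(broken_cable=None, hub_up=True):
    """Stations that can still reach the hub (and hence each other)."""
    if not hub_up:
        return set()                       # hub failure takes down the LAN
    return {s for s in stations if s != broken_cable}

print(sorted(reachable(broken_cable="PC2")))  # ['PC1', 'PC3', 'PC4']
print(sorted(reachable(hub_up=False)))        # []
```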

References

Lammle, Todd (2003). CCNA: Cisco Certified Network Associate Study Guide. 4th edition.

Craft, Melissa (2003). Faster Smarter Network+ Certification. Redmond, WA: Microsoft Press.

Stallings, William (2000). Data and Computer Communications. 6th edition. Upper Saddle River, NJ: Prentice Hall.

Product Pitch: SprintMusic MP3 Player

The SprintMusic is an MP3 player that uses kinetic energy to operate.

Product Idea

The idea for the product arose from the need to save energy and find alternative energy sources. Eco-marketing and eco-products are the latest buzz in every industry, especially electronics. The product is called the SprintMusic, and the concept is simple: an MP3 player that runs on kinetic energy. This small player is designed to capture the kinetic energy generated when we run or dance and convert it into power. The player itself is kept in the pocket, while a small peripheral device is attached to the pant leg.

This connecting device is wireless and communicates with the player via Bluetooth; it is the kinetic module of the device, attached to the leg of the pants. The connector transmits energy to the player, which otherwise operates like any other MP3 player. The product is unique in its eco-friendly design. A sample of the design is presented in figure 1.

The product is designed not only to play music for the listener, i.e., the runner, but also to report the distance run and the speed at which it was covered. The sensor placed on the pant leg not only gathers energy from the legs but also monitors their movement and calculates the distance run by the jogger. On completion of the run it shows three things: (1) distance, (2) speed, and (3) calories burnt.

The player will have a digital display that shows the desired information at the press of a button just below the screen. It also allows a runner to set a desired speed, or range of speeds, for a run, and it gives a signal when that range is not achieved; in other words, the player sounds a soft alarm when the target speed is not attained or the runner slows down.

The product therefore has three features: (1) it is eco-friendly, since it uses a unique form of energy, kinetic energy; (2) it reports the distance run and the speed; and (3) it lets the user set a desired speed range and indicates when that speed is not attained.
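To make the feature set concrete, here is a minimal sketch of how the player’s readout logic might work. The stride length, calorie constant, and default target range are purely hypothetical values chosen for illustration, not product specifications.

```python
# Hypothetical sketch of the SprintMusic readout: distance and speed from
# leg-sensor step counts, a rough calorie estimate, and the speed-range
# alarm. All constants are illustrative assumptions, not specifications.

STRIDE_M = 1.0          # assumed average stride length, in metres
KCAL_PER_KM = 60.0      # assumed rough calories burnt per km of running

def run_summary(steps, seconds, target_kmh=(8.0, 12.0)):
    distance_km = steps * STRIDE_M / 1000.0
    speed_kmh = distance_km / (seconds / 3600.0)
    return {
        "distance_km": round(distance_km, 2),
        "speed_kmh": round(speed_kmh, 1),
        "calories": round(distance_km * KCAL_PER_KM),
        # soft alarm if the runner drops out of the chosen speed range
        "alarm": not (target_kmh[0] <= speed_kmh <= target_kmh[1]),
    }

print(run_summary(steps=5000, seconds=1800))
# 5 km in 30 minutes -> 10 km/h, ~300 kcal, no alarm
```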

Figure 1: A sketch of SprintMusic

The usability of the product lies in its unique design and power-packed features. The player is specially designed for athletes and sportspeople, but it can also be targeted at eco-conscious customers looking for eco-friendly alternatives.

Its ease of use and its USP of running on kinetic energy will let marketers sell it to all health-conscious and eco-conscious consumers. The product is unique even in its feature offerings, which let the user see the distance she ran and the speed at which she covered it. It also lets her set the speed range at which she wants to run, helping her maintain a running rhythm.

Package Design

Figure 2: Packaging for SprintMusic

In keeping with the product’s eco-friendly design, the package is also designed in an eco-friendly manner: the box is made of cardboard, with no plastic or other non-biodegradable material.

The product would be tied with a string of natural fiber rather than plastic or synthetic material (see the design in figure 2), and a prototype of the package is shown in figure 3. The product would be presented in the box, and the cardboard packaging keeps the whole presentation eco-friendly, allowing the product to carry through its eco-friendly image.

Figure 3: Prototype of packaging

Practicality

The product is practical and can be marketed to a large number of customers. As eco-friendly, green products gain ever greater prominence and green marketing becomes the trick of the trade, such products are hard to overlook. An MP3 player running on kinetic energy avoids wasting batteries and offers a new way of generating energy for small, low-power devices. The product is viable because it is easy to use, small, lightweight, and handy.

It is small enough to fit in a pocket, and the sensor module attaches easily to the pant leg to generate the energy required to run the system. The product is not cumbersome: wiring is minimal, with the headset the only wired component, and a Bluetooth option lets users wear wireless earpieces when they prefer.

Further, the added features let the user listen to music, save energy, and review the distance run, the speed, and the calories burnt over the stretch. The technology is feasible, since it is already available in treadmills; this device simply gives users who like to jog or run in the park, on the road, or on a field the convenience of a treadmill in terms of the information displayed.

Digital Divide in the United States

In my opinion, the digital divide is not a central problem in the United States: having reached developed-country status, most of its people have access to the digital network, and most systems, workplaces, and homes are linked to it.

However, it may be a major problem for other countries, as most have yet to reach developed status and may lack the finances to ensure that everyone can access the digital world. A very small percentage of people in third-world and developing countries actually have access to the digital network; many, being poor, may not even have heard of the digital world.

Thus, one of the ways to reduce poverty, as suggested by experts, is to close the digital divide. According to them, access to the digital world helps in many aspects of poor people’s everyday lives. For example, digital technology can improve rural health and education, giving people more access to the latest medicines, updates in the education system, and much else. Closing the divide could also help curb terrorism, since digital technology enables wide communication, and people can talk instead of resorting to violence.

The government can play its role in closing the digital divide by allocating funds to bring digital technology to the poor and by subsidizing companies that want to help extend the communication network to these places.

This would encourage other bodies to participate and provide digital technology assistance to rural areas, and it would draw bright new ideas from the general public, not only from IT professionals. The private sector could expand its technology coverage to more rural areas to encourage greater access. IT professionals, too, can play a huge role in this issue.

Companies can devise new ways to spread information technology to the people who can really use it, such as the poor, and the government should encourage IT companies and professionals to give back to society. Apart from that, telecommunication companies should not be privatized, as they would then focus mostly on profit rather than on expanding the communication network; if the government ran the telephone companies, it could concentrate on extending the network into rural areas.

Moreover, solving the digital divide problem could bring a more positive outcome for the IT industry as well as for the rest of the country. IT professionals could also educate rural and poor people, making them more aware of the digital world and teaching them how to adapt to technology.

They could also show people the positive outcomes of this communication network, raising awareness of the communication technology available while encouraging interest in it. Besides that, the governments of developed countries should sit down together and explain the importance of bridging the digital divide.
