This paper discusses project delivery methods. We believe that the chosen project delivery method is one of the main keys to success in realizing a project, especially under today's unstable business conditions. First of all, we have to define the term project delivery method. One of the most appropriate definitions of the term is the following.
A project delivery method is a system used by an organization for organizing and financing the design, construction, operation, and support services of its projects. Such a system is usually implemented via legal agreements with other entities.
There are many types of project delivery methods, among which the following may be pointed out:
Design Bid Build, or Design Award Build (the owner develops a contract specifying all the details, which is then awarded to the bidder proposing the lowest price);
Design Bid Build with Construction Management (a dedicated construction manager is hired, and the owner shares work and risk with this manager);
Design Build, or Design Construct (a conceptual plan is developed, and bids are then accepted from contractors who develop the construction plan);
Design Build Operate Maintain (a single contractor performs all the tasks related to realizing the project);
Build Operate Transfer (a contractor finances, builds, and operates the facility before transferring it to the owner);
Integrated Project Delivery (all key participants collaborate through all phases of the project).
We have chosen the last of these, Integrated Project Delivery, for the project under consideration. A formal definition of this method follows.
Integrated Project Delivery (IPD) is a project delivery approach that integrates people, systems, business structures and practices into a process that collaboratively harnesses the talents and insights of all participants to optimize project results, increase value to the owner, reduce waste, and maximize efficiency through all phases of design, fabrication, and construction (Integrated Project Delivery: A Guide).
Of course, this method has its advantages and disadvantages. Among the advantages, the following may be pointed out:
It is based on partnership and collaboration, which makes a synergistic effect possible (every party is able to bring something positive to the project);
It helps to use time more effectively and to meet all the deadlines (we have a clear plan and schedule);
It can be supported by the newest technological solutions (reducing mistakes, human error, and wasted time);
It is easy to implement (there are many guides and even specially developed tools).
Among the disadvantages of the chosen method, the following may be pointed out:
The owner is responsible for all project management functions (we cannot be totally sure of the degree of professionalism);
Project costs become known only later in the process, which makes it more difficult to prepare accurate budgets;
The costs of realizing the project may grow (which may affect our budget and financial capacity).
Finally, we have to explain why this approach can be applied to the project under consideration. We believe there are many reasons, the main ones being the following:
It suits the character of the project (this particular project in this particular area can be realized optimally using this approach);
It suits the preliminary budget (other approaches are more expensive according to preliminary estimates);
It has already proven its efficiency (we can point to the experience of other companies and other projects);
There are reliable specialists who can realize this approach (we can use outsourcing to implement it);
There are dedicated IT solutions (generally, these solutions are not very expensive and are easy to implement).
How ready an organization's information technology team is to deal with security incidents counts for a great deal during incident response. Some organizations only learn how to tackle security incidents after they have suffered one; by then, these incidents always turn out to be more expensive than they would have been if planned for earlier. This paper analyzes the development of an incident-response policy at Gem Infosys, covering the formation of an incident-response team, a disaster recovery process, and a business continuity plan, to minimize network downtime when security incidents occur in the future.
Incident-response policy
It is important, first of all, to reduce the number and seriousness of security incidents. Security incidents cannot be totally prevented; therefore, it is advisable to minimize their impact and the resulting network downtime. This can be achieved by formulating and enforcing security policies and procedures, acquiring management support for security and incident-handling policies, and regularly assessing vulnerabilities in the organization. Routinely checking all systems and network appliances to ascertain that they are updated, introducing security training for IT staff and users, and forming an incident-response team to handle security incidents also help minimize network downtime.
Developing an incident response team
An incident-response team comprises people with the duty of handling security incidents, with well-defined responsibilities that guarantee all areas of response are covered. Bringing a team together before an incident takes place is vital and will contribute to the successful handling of incidents (Conklin et al., 2012). An efficient team will monitor systems for security breaches, record security incidents, promote security awareness within the organization to help minimize incidents, research new attack strategies while updating existing systems, and build new technologies for reducing security risks. After the team is created, it should be trained on the correct use and placement of important security tools and on collecting all necessary communication data. All information on emergency systems should be kept in a common location. This may include crucial passwords, router configuration information, important contacts, and duplicates of certified keys.
All members of the incident-response team should know what is required of them in case of an incident, and they are expected to review the incident-response policy in detail. An incident-response plan entails performing an initial assessment, reporting the incident, containing the damage and reducing the risk, classifying the type and seriousness of the incident, and protecting the evidence. Recovering systems, compiling incident documentation, measuring the damage and cost incurred by the incident, reviewing the response, and updating policies are also duties of the team.
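The response duties above form an ordered sequence, and one way to keep a team honest about completing every step is to track them explicitly. The sketch below models that idea in Python; the step names paraphrase the plan described here and the incident description is invented for illustration, not Gem Infosys's actual procedure.

```python
from dataclasses import dataclass, field

# Ordered response steps, paraphrased from the incident-response plan above.
RESPONSE_STEPS = [
    "initial_assessment",
    "report_incident",
    "contain_damage",
    "classify_severity",
    "protect_evidence",
    "recover_systems",
    "document_incident",
    "assess_cost",
    "review_and_update_policy",
]

@dataclass
class Incident:
    description: str
    completed: list = field(default_factory=list)

    def complete_next(self) -> str:
        """Mark the next outstanding step as done and return its name."""
        step = RESPONSE_STEPS[len(self.completed)]
        self.completed.append(step)
        return step

    @property
    def closed(self) -> bool:
        """An incident is closed only when every step has been performed."""
        return len(self.completed) == len(RESPONSE_STEPS)

# Hypothetical incident walked through the full checklist.
incident = Incident("suspicious outbound traffic from a file server")
while not incident.closed:
    incident.complete_next()
```

Because the checklist is ordered, no incident can be marked closed while, say, evidence protection or the post-incident policy review is still outstanding.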
Disaster Recovery Process
The disaster recovery process generally depends on how serious the security breach is. First, it should be determined whether the initial system can be repaired and still function properly or whether it needs to be rebuilt. Restoring data ultimately depends on the backup created. A good backup will raise an alert in case of any damage; without one, an incident can damage the systems for a long time before anyone notices. During the incident-response process, it is also advisable to ascertain how long the incident lasted.
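One common way a backup can "raise an alert in case of any damage" is to record a checksum manifest at backup time and verify it on restore. The sketch below illustrates the idea with SHA-256; the file name and contents are hypothetical.

```python
import hashlib

def checksum(data: bytes) -> str:
    """Return the SHA-256 hex digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

# Manifest recorded at backup time: file name -> checksum.
# The file and its contents are invented for this example.
manifest = {"router_config.txt": checksum(b"router config v1")}

def verify(name: str, data: bytes) -> bool:
    """Return True if restored data matches the checksum recorded at backup time."""
    return manifest.get(name) == checksum(data)

intact = verify("router_config.txt", b"router config v1")     # matches manifest
damaged = verify("router_config.txt", b"tampered contents")   # mismatch flags damage
```

A restore procedure that runs such a verification on every file can alert the team to silent corruption or tampering before a damaged backup is put back into production.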
Conclusion
A business continuity plan is important in keeping the business running even after an incident. Gem Infosys needs a business continuity plan supported by a secure and resilient IP infrastructure that helps it recover quickly from all types of incidents. The most important element of a business-continuity plan is network continuity (Snedaker, 2007). Network downtime can be reduced by combining network facilities to back up, recover, or protect important communication services and data. A good business-continuity plan ensures that people remain connected to each other, and to suppliers and consumers, regardless of the extent of the incident.
References
Conklin, A., White, G., Williams, D., Davis, C., Cothren, C., & Schou, C. (2012). Principles of computer security: CompTIA Security+ and beyond (Exam SY0-301). New York: McGraw-Hill.
Snedaker, S. (2007). Business continuity and disaster recovery planning for IT professionals. Amsterdam: Elsevier.
Sufficient number of soccer fields to host the tournament
Sufficient funds to successfully conduct the soccer tournament
Favorable conditions for the tournament
These deliverables may be achieved if the committee considers all possible details important for the project (see fig.1).
Developing WBS to Alleviate Problems that Occurred during the First Meeting
Developing a WBS would help Nicolette avoid the problems experienced during the first meeting by enabling her to have objective targets that need to be achieved for successful completion of the tournament. The WBS would help her identify the main objective of the tournament and the expected deliverables. From these, she would be able to objectively identify the processes and programs that need to be developed to ensure the successful implementation of the activities of the tournament. Use of the WBS would also enable her to develop an objective timeline for the achievement of the deliverables desired. This would ensure the efficient utilization of time and other resources available for the tournament, ultimately ensuring that the tournament is carried out successfully.
The problems experienced during the first meeting would thus be dealt with effectively. Time wastage caused by discussing issues unrelated to the objectives of the tournament would be avoided (Larson and Gray 124). Allocating specific duties to committee members according to their specialties would become possible, all geared towards achieving the objectives set out for the tournament.
Where Additional Information Can Be Found
Additional information on the use of the WBS to assist in planning the tournament can be obtained from scholarly journals that extensively detail the use of the work breakdown structure in projects. Books on the same topic would also be handy in gathering the information relevant to the successful implementation of a project through the work breakdown structure.
Another relevant source of information on the use of the WBS would be individuals who have experience using the system. Project managers in consultancy firms would be the ideal people to consult in this case, and they would provide information relevant to how the tournament can be successfully run through the use of the WBS. From such experienced individuals, information based on real-life project management with the system can be obtained.
How WBS Can Be Used to Generate Cost Estimates for the Tournament
The use of the work breakdown structure requires that all the resources to be used in a project be accounted for, including financial, time, and human resources. To obtain cost estimates for hosting the tournament, the various activities under the project's deliverables have their costs estimated accordingly. The cost of each activity will be determined by the budgetary allocation of the financial resources available for the tournament. In this way, objective adjustments can be made to each activity to enable the development of a definitive cost estimate for the project. These estimates will enable the development of the most appropriate budget for the tournament, based on the available financial resources, and may also be useful in determining sources of funding for the project.
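The bottom-up estimating described above can be sketched as a simple rollup over a WBS tree: leaf work packages carry cost estimates, and each summary task's cost is the sum of its children. Every work-package name and figure below is invented for illustration.

```python
# Hypothetical WBS fragment for the tournament; all figures are illustrative.
wbs = {
    "tournament": {
        "fields": {"rental": 1200.0, "maintenance": 300.0},
        "funding": {"fundraising_event": 450.0},
        "operations": {"referees": 800.0, "first_aid": 250.0},
    }
}

def rollup(node) -> float:
    """Sum estimated costs bottom-up: a number is a leaf work-package cost,
    a dict is a summary task whose cost is the sum of its children."""
    if isinstance(node, dict):
        return sum(rollup(child) for child in node.values())
    return float(node)

total = rollup(wbs)           # overall project estimate
fields = rollup(wbs["tournament"]["fields"])  # estimate for one deliverable
```

Because the same function works at any level of the tree, the committee can quote an estimate for a single deliverable or for the whole tournament from the same data, and adjusting one leaf automatically updates every summary above it.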
Works Cited
Larson, Eric and Clifford Gray. Project Management: The Managerial Process. Boston: McGraw-Hill/Irwin, 2010. Print.
The cyber-world has often been described in books as a fascinating place and has been used as the basis for many computer games and movies. With the development of technology, including gaming consoles, the game world has become a form of cyberspace that people escape to. There is a lot of literature on this subject, as well as technological simulators of unreal worlds.
Snow Crash by Neal Stephenson describes a virtual world into which people can upload themselves and lead an active lifestyle. Although this idea might have seemed farfetched and unrealistic 10 years ago, it is becoming easier and easier nowadays to imagine a world that exists in a computer program. The author describes the world as a natural, real, but very different space. The purely black floors and walls create a surreal image. On the other hand, the author reminds the readers that this world is a virtual reality by noting that people draw their characters in the computer world. The paradox is that these characters are actually sets of programming code in a computer while being connected to players who are real people. In addition, the line between different financial classes is drawn as it is in real life, and Snow Crash reminds the readers that there is no escaping this division. Both the real world and the virtual one are created by people, and so the attributes of the natural world transfer over to cyberspace. Those with sufficient money purchase high-performance computers and beautify their characters; they can spend more and have a unique, high-quality image of themselves in the Metaverse. However, people in lower financial classes sometimes cannot afford computers and therefore simply miss out on the experience. Even if they do own one, their only option is a cheap black-and-white avatar, a simple standard form. The resolution and pixel count account for the quality of a person's representation in the cyber world. This connection of the moral and physical realities to the real world makes the computer world feel rather real. Even though the fact that characters in games can walk through one another distances it from the real world, the moral principles and the understanding of the material world seem real. Setbacks present in the real world appear in the computer world too.
At the time this book was written, it probably did not have a strong impact on its readers, because the technology had only just started to develop: people could not imagine such high-tech possibilities for computers. But younger generations, whose daily lives closely involve computer-related activities, are affected by the book, as its ideas have become rather attainable. The author creates the idea that the cyber world is real. Today's video games are based on the closest possible representation of reality: the more realistic a game is, the more popular it is. The development of virtual reality technology has reached heights that were not even imagined 20 years ago. There are ways to experience physical sensations, smells, and other real-world stimuli through gaming. These expressions of different forms of stimuli make the cyber world more realistic than ever.
A short story by Vernor Vinge called True Names is another representation of a world inside a computer. One of the interesting things about the story is how closely it connects itself to the real world, including the emotions and thoughts people experience in reality. In the end, these connections make the computer world seem real, adding to its physical and moral representation. A person who is immersed in such a reality starts to experience the computer world as he or she does the real one. With computer viruses and software that can influence brain waves and thus change mental states, such a virtual world becomes even more real. Cyberspace can be compared to people's dreams, in which people do not use their eyes or ears to experience the dream world; nonetheless, dreams very often seem real. The individual feels the emotions and physical concreteness of the dream, and the mind tricks itself into believing that the dream world is real. The same can be said about virtual reality: the conscious mind forgets that the person is in a computer world. The simulation of feelings and thoughts becomes so real that a person believes in the reality of the computer program and spends countless hours in the cyber world. There is much evidence in the present day to support this. Games and computer programs are so interactive and realistic that a person can spend a lot of time immersed in them. There are numerous stories about people who live in a world of computers and virtual spaces. Hackers are one such group, spending their time in this virtual world analyzing programming code and bugs. Comparatively, other users do not know as much as professional programmers or hackers do. Most users can only purchase software, a game, to experience things they would not be able to experience in real life.
Access to such technology has created a world of opportunities, allowing game players to visit far-off places on the planet without physically going there. For instance, virtual software allows people to tour the Pyramids or climb the tallest building. Such software is made so realistic that users feel as if they are touring in real life; thus, travel expenses and time are saved. Many games are close reflections of the short stories, giving people an opportunity to interact with others in a 3-D world. To this day, the literature describing virtual worlds has grown very extensive. The technological age has widened people's imagination and the possibilities for creating a different yet realistic world. It would not come as a surprise if, in the next decade, people live almost entirely in a computer world, taking occasional breaks to eat and drink.
Books, movies, and video games all create a world that is separate from the real one. People have always needed to escape into a world that they can create and shape. Computer technologies have allowed for this, and the advances are very impressive. Studies of the human brain have provided a basis for software that can influence receptors in the brain and make a person believe that the virtual world is real.
Krames's book What the Best CEOs Know: 7 Exceptional Leaders and Their Lessons for Transforming Any Business (2003) profiles some of the most successful individuals who have led organizations. This essay focuses on the individuals discussed in chapters 2 and 5 of the book, comparing and contrasting the experience of Michael Dell, the founder of Dell Computers, with that of Andy Grove of Intel.
Michael Dell and His Contribution in the Field
The professional under discussion in chapter 2 is Michael Dell, the former CEO who saw the organization become a multibillion-dollar firm. His contribution was in the field of domestic consumer products; some of the initial products were air purifiers used in homes and offices. His strategy involved direct interaction with consumers from the time he was selling personal computers in college (Dell & Fredman, 1999). His strategy also made consumers the center of his business. With this policy, he ensured that product development, design, and the final production processes were informed by customers' requirements.
The supply chain that he created in the company was also dependent on the customer, and it continued to be the company's main strategy. The company adopted a bottom-up approach to production, an option informed by the mass customization the company had embraced (Dell & Fredman, 1999). As Krames states, "Dell's direct model of mass customization was not born of any desire to revolutionize an industry. Instead, it was founded on a bottom-up strategy based on customers' needs and preferences" (2003, p. 59). Most companies at the time would guess what customers needed and use the final products to assess those clients' preferences. Dell, on the other hand, used a "non-implosive customer approach" (Krames, 2003, p. 59). One of the products developed because of this strategy was the Olympic computer (Krames, 2003, p. 67).
The Resistance That Dell Faced
One of the challenges Dell faced was resistance to the application of the customer-based strategy in the organization, since some of the managers who opposed this approach forced their own views on the customer (Krames, 2003, p. 59).
Andy Grove and His Contribution in the Field
In chapter 5 of the book, Krames (2003) focuses on Andy Grove, the professional credited with building one of the most successful publicly listed companies. Grove is one of the pioneers of the silicon chip industry. He is also credited with the establishment of Intel Corporation, which was at the forefront of the development of Silicon Valley and is the world's leader in the development of semiconductors (Jackson, 1997).
Grove's contribution to the industry lay mainly in acting as a leadership figure through his immense knowledge and leadership skills. Such skills were especially necessary for the organization in times when drastic changes and decisions were needed (Krames, 2003). He is one of the professionals who followed Moore's law in the provision of chips from Silicon Valley. At the organizational level, he ensured that the most effective manufacturing and marketing methods were utilized.
The Resistance That Grove Faced
The challenge that Grove faced, which turned out to be a source of resistance during his reign, was the company's evident lateness in engaging the market. The challenge concerned product development, which contributed to losses for the organization. He states that the company was late in changing to newer technology and in moving to newer installations and factories.
Similarities and Differences
Dell is portrayed as focused on consumer satisfaction, with this strategy being the main driver of his company's success. Grove, on the other hand, ensured that effective marketing was applied at Intel (Krames, 2003). Both were leaders in their respective capacities, leading to new ways of doing things in the organizations and industries where they operated. Dell Computers followed the strategies developed by Dell, which were focused on developing products that consumers could relate to. In contrast, Grove concentrated on developing revolutionary products, thus setting the pace for the other market and industry participants (Jackson, 1997).
Factors that Influenced their Success
Intel can attribute its success to the qualities displayed by Grove, especially his self-confidence and the belief he held in his products; the institution adopted the same outlook, applying an outsider's perspective to its own shortcomings. Dell, however, believed in the power that consumers possess, developing strong links with them and doing away with the existing tradition of middlemen in the supply chain. Some of the factors behind the two men's success are similar. Both had a personal love for the industries in which they operated, proficiency in electronics, and strong leadership skills, with their environments also facilitating the development of those skills. Both likewise demonstrated entrepreneurial skills at a young age, with a driving slogan of never giving up on their ventures.
Reference List
Dell, M., & Fredman, C. (1999). Direct from Dell: strategies that revolutionized an industry. New York, NY: Harper Business.
Jackson, T. (1997). Inside Intel: Andy Grove and the rise of the world's most powerful chip company. New York, NY: Dutton.
Krames, A. (2003). What the best CEOs know: 7 exceptional leaders and their lessons for transforming any business. New York, NY: McGraw-Hill.
Network neutrality and why the internet so far has worked under it
Network neutrality, often shortened to net neutrality, is essentially a call for internet users' freedom not to be restricted by internet service providers in the web content they can access or upload. According to Farber, this has made it a grand policy issue that is bound to change how both businesses and individuals use the internet. Since its inception, the internet has run under this principle, which has been instrumental to its growth and to new ideas. Farber goes on to explain how the concept of evenly balanced access to web content has contributed to the drastic growth seen in internet use.
Those for and against network neutrality and their reasons
The customers of the network providers support network neutrality and propose regulation that will favor them. They argue that allowing broadband carriers to manipulate what is accessed online would be detrimental to the principles behind the internet's success (Farber 451). Farber goes on to argue that, before long, a huge number of web users will face restrictions from internet service providers who discriminate against certain online activities. Stakeholders in the internet industry have therefore pushed for legislation to safeguard internet principles, with most suggesting that broadband carriers cannot be relied upon to remain impartial and that legislation should control the carriers' activity.
Control of information should not be left to the carriers, as they may be biased; this would result in total domination of online services by the carriers. The network providers view the matter from a different angle. They believe that the extra mile they go is not appreciated when they spend a great deal of their resources on network infrastructure, while the companies that are their customers do not pay more yet use this infrastructure so heavily that it strains the network maintenance budget. As Farber explains, the internet service providers' strategy is therefore to differentiate prices based on the bandwidth consumed by content delivery on the internet. Service providers argue that this is fair, considering that fewer than 10% of their customers use more than half the capacity of the local network without paying extra.
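A usage-based pricing scheme of the kind the providers describe can be sketched as tiered billing, where successive bands of consumption are charged at different per-gigabyte rates. The tier boundaries and rates below are invented for illustration and do not reflect any actual carrier's price list.

```python
# Illustrative tiers: (band size in GB, price per GB). All figures are assumptions.
TIERS = [
    (100, 0.00),           # first 100 GB included in the flat base fee
    (400, 0.05),           # next 400 GB billed at $0.05/GB
    (float("inf"), 0.02),  # everything beyond billed at a lower bulk rate
]

def usage_charge(gb: float) -> float:
    """Compute the bandwidth charge on top of the base fee by walking the tiers."""
    charge, remaining = 0.0, gb
    for band_size, rate in TIERS:
        used = min(remaining, band_size)
        charge += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return round(charge, 2)

light = usage_charge(50)    # stays inside the included band -> 0.0
medium = usage_charge(300)  # 100 GB free + 200 GB at $0.05 -> 10.0
heavy = usage_charge(600)   # adds 100 GB at the bulk rate -> 22.0
```

Under such a scheme, the small minority of heavy users described above would shoulder charges proportional to their consumption, which is precisely the fairness argument the providers make.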
Impact of tiered service on corporations, individual users, and government organizations
When access to the internet is based on one's ability to pay and one's significance, the largest losers will be individual users, who cannot compete with government organizations or corporate institutions. With priority given to premium users, individual users will either lose connectivity or be limited in their access to websites. Further, it would entrench socio-economic classism, with a rich few accessing information that others cannot because of their inability to pay for it.
Why I am in favor of network neutrality being enforced by legislation
If legislation enforces network neutrality, it will prevent unwarranted discrimination designed to earn money at the expense of our freedom. Moreover, legislation is grounded in the law, and every person found culpable can be prosecuted accordingly.
The citizens of the United States took the news of the NSA spying on them with shock and discomfort. Nobody fancies a breach of his or her privacy, especially when it entails access to information we consider private (Ekins, 2013). When the whistleblower broke the news that the NSA was running an illegal program spying on US citizens, people experienced a range of emotions (Hill, 2013).
Main body
The information leaked by the NSA operative proves that the government, through the NSA, has been carrying out illegal surveillance of its citizens. One is forced to wonder how long these violations have existed. The NSA was established in 1952 and has been monitoring communications since then. It acquired its staff from other intelligence agencies to form an elite intelligence service with unrivaled competence (Pappas, 2013). By going through loads of data, the NSA can pick out anything of interest, like searching for the proverbial needle in a haystack. The agency has set up listening posts all around the world, including an office at the NASA mission control center. Since its establishment, the NSA has been collecting data from all over the world, and it has long spied on US citizens. It is on record that the NSA pursued a New Jersey couple who were leaking information to the Soviets, through surveillance of their communication channels (Ekins, 2013).
Some parts of the media have reported that we ought to have realized this operation was in progress and that people should not protest (Sulmasy, 2013). That does not mean citizens should refrain from protesting against the government invading their privacy.
It is worth noting that, as much as the NSA may have set up this operation legally, it has done so in secrecy; even its budget is unknown. US citizens are often conservative when it comes to questioning activities carried out under the banner of national security (Sulmasy, 2013). This operation carried out by the NSA is illegal; the people should speak against it and demand its closure once the government has released an official statement on the matter. The sad part of this revelation is that private companies were involved in some way. The leaked slide does not state whether the companies gave up the information knowingly or unknowingly. The companies mentioned have denied any knowledge of the operation and have stated that they never gave out any data unless a court order or search warrant was involved (Pappas, 2013).
The law categorically states that the US Justice Department must prove that proposed surveillance will not target US citizens or residents, and that the department must comply with the Fourth Amendment (Ekins, 2013). The actions of the NSA violate internet privacy and the basic liberties of the people. The surveillance operations were introduced by the Bush administration, but President Obama's government has continued to carry them out (Hill, 2013). Historically, America has maintained that it will uphold and respect human liberties and human rights, and nothing should compromise this commitment made to the citizens of the US. The Constitution primarily prevents the federal government from conducting unreasonable searches and seizures on its people. This longstanding commitment has provided the foundation of America's policies and laws. America, as the greatest democracy, should not give up these principles, because doing so would be the biggest betrayal of its citizens.
The US needs a transparent government that respects the privacy of its citizens. We need to resist secret laws that infringe on the privacy of US citizens. Government officials might argue that this operation is necessary to keep citizens safe, but that does not mean we should accept this violation of citizens' rights (Pappas, 2013). The government has the resources necessary to maintain national security without violating citizens' rights, and it should use the means that do not violate any law. These secret operations serve only to satisfy politicians' fantasies, and it is dangerous for citizens to put their trust in politicians guided by power-seeking motives. The current president of the United States should explain the real motivation behind this program and its relevance to his people. These revelations come at a time when the government's credibility with the people is already being questioned, and the president has failed to be honest, forthright, and credible about this surveillance program (Hill, 2013).
Conclusion
The real debate here is not about the effects of the spying program on technology. The problem lies with the law, which does not prohibit what the federal government does with technology (Ekins, 2013). The law does not protect the basic principles on which this great country was founded. The government is supposed to protect the privacy of its people, but as the NSA revelations show, the government is the greatest threat to citizens' privacy. The government can conduct surveillance and use the data obtained against us. This NSA program should be strongly condemned and stopped because it is against the law.
References
Ekins, E. (2013). Public More Wary of NSA Surveillance Than Pundits Claim. Reason. Web.
Hill, K. (2013). How Americans Views On Surveillance Have Changed Over The Last Decade (Or Rather Not Changed). Forbes. Web.
Pappas, S. (2013). Privacy Concerns NSA Surveillance. Live Science. Web.
Sulmasy, G. (2013). Opinion Sulmasy NSA. CNN. Web.
The idea that the current paper discusses is the utilization of quantum computing for decision-making and problem-solving processes. The difference between personal computers and their innovative counterparts was addressed in detail by Alan Baratz, the CEO of D-Wave. He proposed paying more attention to potential startups that could use software to solve fundamental physics problems and business problems at the same time. This approach proved useful because it broadened the horizons of conventional developers who had only worked with non-quantum computers.1 Owing to Baratz's efforts, quantum computing quickly became the most important area of development for D-Wave, as the CEO realized that it could be crucial to exploit the advantages offered by the quantum medium.
The most important point about the idea of quantum computing at D-Wave is that the company quickly realized its marketing potential and took over the quantum computing market. The essential advantage of quantum computing is the presence of dedicated cloud services that make it possible to store and retrieve data from an online database at unprecedented speeds.2 The strength of these cloud-based initiatives gives quantum computing a lead that is practically unreachable for competitors, because each effort exerted by teams interested in this innovative technology helps overcome skepticism and advances both hardware and software.
Potential of the Idea
The first reason why quantum computing technology has potential is its ability to translate common physics into hardware and software solutions, allowing researchers to combine particles and achieve results in which such particles may represent anything of interest to the developers. The benefit of quantum computing, in this case, is that particles may remain in superposition when not observed, allowing the developers to increase the number of potential combinations of particles.
The second reason that validates the potential of quantum computing is the advent of qubits, which can store many more values than their conventional counterparts (bits). All 16 combinations of a four-qubit register can be accessed effectively at any moment, and two qubits can also be linked so that they react to one another's transformations in real time.3 Therefore, the limited capacity of conventional cloud-based services comes to an end with quantum computing and qubits.
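The exponential growth behind that claim can be illustrated with a short sketch. This is a classical simulation for illustration only, not real quantum hardware: an n-qubit register spans 2**n basis states, so four qubits give exactly 16 combinations, and a uniform superposition assigns every combination an equal amplitude.

```python
from itertools import product

def basis_states(n):
    """Enumerate the classical bit patterns an n-qubit register can superpose."""
    return [''.join(bits) for bits in product('01', repeat=n)]

def uniform_superposition(n):
    """Equal-amplitude state over all 2**n basis states (probabilities sum to 1)."""
    amplitude = (1 / 2 ** n) ** 0.5
    return {state: amplitude for state in basis_states(n)}

print(len(basis_states(4)))  # 16 -- all combinations of a four-qubit register
psi = uniform_superposition(4)
print(sum(a * a for a in psi.values()))  # squared amplitudes sum to 1.0
```

Each extra qubit doubles the number of representable combinations, which is the scaling advantage the paragraph above describes.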
The final reason to perceive quantum computing as a technology of the future is the concept of quantum teleportation, which creates room for quicker communication among data qubits. The experience of D-Wave shows that qubit manipulation may also be helpful, as the developers get multiple chances to acquire a single output while having numerous input points. A decision-making system based on quantum computing uses this single output to compile all possible answers.
Scalability of the Idea
The first way to scale the idea of quantum computing would be to develop improved materials modeling and allocate more resources to materials simulation. Even though the technology is mostly used to resolve physics-related inquiries, its business potential may also lie in simulating the outcomes of complex decisions over the long term.4 Compared to classical computing, its quantum counterpart would be harder to deploy, but the possible increases in the speed and accuracy of operations would make it indispensable.
One more attempt to scale quantum computing would be to optimize the analytical capabilities of the organization and pay more attention to the financial side of the implementation process. The sequential background of classical computing would not support an increased number of variables, which means that quantum computing could be scaled through resource optimization.5 The parallelism inherent in this technology might be used for many business areas such as task scheduling, planning, or managing resources.
The last option available to the researchers interested in scaling quantum computing would be a stronger focus on cloud services and database optimization. The increasing demand could be managed with the help of nonrelational database capabilities such as better indexing and higher speed of query processing.6 Any unstructured data sets available to quantum computing users would be evaluated by tech specialists, who would also disclose the potential improvement areas and see if the innovation could be scaled further.
Scalability Challenges
The first challenge affecting the scalability of quantum computing is the presence of computational issues stemming from the fact that the use of qubits is still rather under-researched. Even though there are more opportunities for data processing and analysis, it would be harder to scale quantum computing when simulations are incredibly costly and put a strain on the organization. Therefore, computational issues decrease the validity of potential solutions to long-standing business problems that conventional computing tools could not address.
Another problem is the so-called quantum supremacy threshold, which would make it harder for the organization to reduce the rate of errors. There will be no chance to scale the technology effectively when the environment does not respond to the essential needs of end-users. An incredibly high error rate is going to deter many organizations from investing in quantum computing, as it would be perceived as a risky asset.
Overall, the scalability of quantum computing could be critically affected by the high cost of deploying and maintaining this innovative technology. Sources of noise and high temperatures would have to be removed completely; otherwise, the whole system would become ineffective and unstable. The team would have to spend an extremely high amount of resources on developing the technology to the point where it becomes less expensive and more beneficial in terms of its price-output ratio.
Latest Innovations in BA Management
The current field of study is BA management, and the most probable position after graduation is either business or data analyst. The first innovation that could influence my professional career is the advent of numerous business management applications that have strengthened the impact of the cloud on management and allowed for improved scalability at the same price.7 This increased level of flexibility and cost-effectiveness makes it evident that the cloud is a technology of the future.
Another crucial innovation that I might benefit from is business intelligence. It could be helpful for generating complex data reports and forecast spreadsheets. The main advantage of business intelligence is that it can be integrated into existing infrastructure without disruption. Even small and medium-sized enterprises are currently making the best use of business intelligence, which means that it does not drastically affect a company's budget.8
The last technology that I might have to watch out for as a business analyst is content management systems. The sheer volume of data collected by corporations across the world makes it safe to say that static websites have become obsolete and must be relegated to legacy software. Even though it would be costly to hire a specialist capable of setting up a given content management system properly, the ultimate outcomes are generally positive and cannot be overlooked.
References
Covers, O., & Doeland, M. (2020). How the financial sector can anticipate the threats of quantum computing to keep payments safe and secure. Journal of Payments Strategy & Systems, 14(2), 147-156.
Cusumano, M. A. (2018). The business of quantum computing. Communications of the ACM, 61(10), 20-22.
Liang, T. P., & Liu, Y. H. (2018). Research landscape of business intelligence and big data analytics: A bibliometrics study. Expert Systems with Applications, 111, 2-10.
Mohseni, M., Read, P., Neven, H., Boixo, S., Denchev, V., Babbush, R., & Martinis, J. (2017). Commercialize quantum technologies in five years. Nature, 543(7644), 171-174.
Rikhardsson, P., & Yigitbasioglu, O. (2018). Business intelligence & analytics in management accounting research: Status and future focus. International Journal of Accounting Information Systems, 29, 37-58.
Businesses seek to use AI instead of people and to attract investment for the robotization of production processes for various reasons: ensuring consistently high-quality products, shortening the production cycle, increasing technological production flexibility, reducing staff turnover, and maximizing profits by saving on costs (Garza, 2019). This paper will focus on minimizing problems associated with the human factor.
A full evaluation of staff performance is crucial for maintaining a business, and nowadays one can observe AI technologies being implemented for this task. Such technological decisions are motivated, not least, by the minimization of human errors associated with organizational risks (Garza, 2019). Thus, an employee evaluated by AI can feel relieved and motivated, since this evaluation is free from personal attitudes and illogical decisions. At the same time, a manager or supervisor can experience disadvantages: a full performance evaluation is a complex procedure on which business processes crucially depend, and by delegating it they lose control over the maintenance of the business.
On the other hand, when one speaks about feedback during the ongoing process of performing job tasks, the advantages differ. It might be a distracting factor for an employee for psychological reasons (for example, having to deal with a machine while tackling the work routine). By contrast, a manager can benefit from such a decision: they gain more time to concentrate on important tasks instead of wasting it on routine ones.
If one considers a wholesale replacement of middle- or even top-level managers by AI, the possibility of such a scenario depends on the type of business. If a business involves simple routine tasks and much of the work is already automated (for example, delivery businesses), this decision is possible and even desirable, because business owners and investors will benefit from it. On the other hand, if a business deals with challenging intellectual activities, replacing human top-level managers with AI is not yet conceivable, since much of the performance in such businesses depends on complex human reasoning, which AI still cannot replicate.
Smartphones have replaced a significant number of technologies that people use for work and everyday life. On the one hand, it has become much cheaper to buy a smartphone than all the technologies people used daily in the 1990s: radio, personal stereo, earphones, cameras and camcorders, CD and MP3 players, tape recorders, and many other devices. Cichon (2014) calculated the difference: $3,054.82 in 1991 (equivalent to $5,100 in 2012) against the affordable cost of a smartphone. For me personally, a smartphone has substituted for CD and MP3 players, cameras, and credit cards (I use a payment app on my smartphone).
There is a common belief that all of the old-fashioned technologies replaced by smartphones will be discontinued sooner or later. I would like to express my disagreement with this statement: I suggest that none of these technologies will disappear completely; instead, they will continue to exist as valuable vintage technologies for connoisseurs, offering their owners and potential buyers a feeling of exclusivity and the familiar flavor of good old vintage style. This happened, for example, with vinyl records and players, and the same is true of tapes. Although they are no longer part of mass production and consumption, they have their own audience, with special symbols and values attached to them.
Considering the prices of smartphones versus the prices of old-fashioned technologies, the comparison is more ambiguous than it seems. When buying technologies in the 90s, people could be sure that those technologies were long-lived and the prices were final. Today, technologies and the ideology behind them are designed to make people buy new smartphones much more often. Moreover, with the acquisition of a smartphone, one's purchases do not cease, making it difficult to calculate the real cost of this technology.
The Internet has changed advertising significantly: nowadays, Internet marketing demonstrates very high marketing potential while also reducing the cost of advertising. The term Internet marketing has been firmly established in business and academia, referring to the promotion of products, services, and brands through the Internet. It involves display ads, email marketing, search engine marketing, social media marketing, and other methods. Internet marketing, or online marketing, has two main advantages for business promotion. First, it is a valuable source of consumer data: demographic characteristics, patterns of everyday life, preferences, previous consumer choices, and identities. Second, online spaces provide various means of access to the consumers themselves: a number of targeting instruments that have been becoming more and more sophisticated and nuanced. These are the main differences from traditional marketing, where coverage is merely a matter of scope and size.
Another distinction lies in ads' presence in people's everyday lives: Internet marketing is everywhere in the online space, making it difficult to distinguish between real content and ads. This suggests that online marketing is more effective at persuasion and communication. I believe that its most powerful method is social media marketing, since it exploits humans' most fundamental desire to belong to a community (Husain et al., 2016). At the same time, I find display ads and email marketing to be the most annoying, because such advertising does not adapt to one's purchase cycle and thus often proves irrelevant.
Husain, S., Ghufran, A., & Chaubey, D. S. (2016). Relevance of social media in marketing and advertising. Splint International Journal of Professionals, 3(7), 21-28.
Natural language processing, or NLP, is a field of artificial intelligence focused on programming computers to analyze natural language data and process it easily. The phases of NLP include analyzing each word's structure, parsing sentences, and conducting semantic analysis (understanding the meanings of words and the relationships between them) (Kalyanathaya et al., 2019). The next phases are discourse integration (linking new language content to what has been said before) and pragmatic analysis (establishing the actual meaning and goal of the text) (Kalyanathaya et al., 2019). Today, NLP positively affects business by giving rise to applications and software that reduce language barriers in international trade, handle some customer service requests, and let businesses benefit from commercial AI assistants.
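As an illustration only, the first two phases can be mocked up in a few lines of Python. The tiny lexicon and the noun-verb grammar rule below are invented for the example and are nowhere near a real NLP toolkit:

```python
import re

# Toy lexicon mapping word forms to (lemma, part of speech, feature) -- an assumption.
LEXICON = {
    "cats": ("cat", "NOUN", "plural"),
    "sleep": ("sleep", "VERB", "present"),
}

def lexical_analysis(text):
    """Phase 1: split the text into words and look up each word's structure."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return [(tok,) + LEXICON.get(tok, (tok, "UNK", None)) for tok in tokens]

def parse(tagged):
    """Phase 2 (toy grammar): accept exactly one NOUN followed by one VERB."""
    return [tag for _, _, tag, _ in tagged] == ["NOUN", "VERB"]

analysed = lexical_analysis("Cats sleep.")
print(analysed)         # each token with its lemma, POS tag, and feature
print(parse(analysed))  # True
```

Real systems replace the hand-written lexicon and rule with statistical or neural models, and add the semantic, discourse, and pragmatic phases on top.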
NLP: History of Development and Purpose
The history of NLP can be traced back to the 1950s, when the global scientific community became interested in exploring computers' ability to demonstrate intelligent behavior and imitate human thinking. The introduction of the Turing Test in 1950 (the test evaluated a computer's ability to hold conversations with people and be mistaken for a human) is often referred to as the very start of NLP and AI (Kalyanathaya et al., 2019). Seven years later, Noam Chomsky revolutionized the field by introducing a rule-based system of syntactic structures, which further supported the development of NLP (Kalyanathaya et al., 2019, p. 199). Unlike many technologies that were initially expected to bring substantial profits, NLP was developed out of the need to further explore the uses of computers when working with texts, thus giving rise to helpful applications.
Positive Impacts of NLP on Business and Practical Implications
Although its ultimate goal (enabling computers to understand and generate adequate and meaningful speech) is a long-term one, NLP has partially fulfilled its promise in many fields of activity, including business and manufacturing. For instance, NLP has given rise to text mining software, machine translation applications, and systems to automate manufacturing processes (Kalyanathaya et al., 2019). Nowadays, NLP-based software is very common and continues to change some aspects of doing business, such as improving the quality of customer service.
NLP and applications based on it have a positive impact on businesses all over the world by facilitating business processes, such as communication in international trade. Machine translation was the most popular application of NLP in the 2000s (Kalyanathaya et al., 2019). Nowadays, machine translation systems are often used in online international marketplaces to reduce language barriers between sellers and customers and support the latter by translating product descriptions into the languages that they understand. There is evidence that the adoption of different forms of AI facilitates international transactions, thus ensuring the success of international economic activities. For instance, according to the statistical study by Brynjolfsson et al. (2019), the introduction of eBays improved machine translation system has caused a 10.9% increase in exports on eBay due to a better quality of product title translation. Apart from product titles, machine translation systems in online marketplaces translate product reviews, thus enabling customers to make well-considered purchasing decisions. Based on the evidence above, NLP and AI facilitate small-scale international merchandise transactions by reducing language barriers between trading parties.
The existence of NLP also allows tech giants to gain further recognition and generate profits by developing and offering NLP-based personal assistant applications. There are numerous popular voice-enabled personal assistants, such as Alexa, Cortana, Siri, and Google Assistant, and the market for such products is predicted to exceed $4.6 billion by 2025 (Tulshan & Dhage, 2018). According to surveys, the best applications are capable of assisting customers with almost 60% of daily tasks, such as finding restaurants and hotels or translating something into a different language (Tulshan & Dhage, 2018). The effectiveness of modern intelligent virtual assistants constantly attracts new users, thus increasing the market power of today's tech giants even more. Although materials on how to build AI-enabled virtual assistants are available in open access, it cannot be denied that the development and testing of such applications require significant financial resources. With that in mind, it can be nearly impossible for smaller companies to benefit from AI and NLP by creating assistants that would be comparable to Siri, Google Assistant, or Cortana.
When it comes to business, customer service and enterprise chatbots can probably be called the most influential practical application of NLP. Chatbots, or chat robots, are computer programs capable of simulating human conversation by providing meaningful responses to voice or text messages from users. Unlike less sophisticated rule-based robots, AI chatbots based on NLP can understand the meaning of users' requests and constantly learn from new information (Petouhoff, 2019). Thanks to their ability to learn, AI chatbots even make customers believe that they are receiving personal consultations from customer support specialists.
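The contrast with rule-based bots is easy to see in miniature. The sketch below is a deliberately simple keyword-matching bot of the less sophisticated kind; the intents and canned replies are invented for illustration, and an NLP chatbot would replace the classify step with a learned language model:

```python
import re

# Keyword-based intents and canned replies -- invented for illustration only.
INTENTS = {
    "balance": ("balance", "account"),
    "appointment": ("appointment", "book", "schedule"),
    "password": ("password", "login"),
}
REPLIES = {
    "balance": "Your balance is shown in the app under 'Accounts'.",
    "appointment": "I can book that for you. Which day works best?",
    "password": "I have sent a reset link to your registered email.",
    None: "Let me connect you to a human agent.",
}

def classify(message):
    """Return the first intent whose keywords appear in the message, else None."""
    words = set(re.findall(r"\w+", message.lower()))
    for intent, keywords in INTENTS.items():
        if words & set(keywords):
            return intent
    return None

def respond(message):
    return REPLIES[classify(message)]

print(respond("How do I reset my password?"))    # the canned password reply
print(respond("What is the meaning of life?"))   # falls back to a human agent
```

A bot like this breaks as soon as a user paraphrases a request, which is exactly the limitation that NLP-based understanding and continuous learning address.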
AI chatbots are becoming increasingly popular among different businesses that work directly with clients. For instance, according to the survey of more than 3000 service organizations conducted in 2019, 23% of respondents already had AI chatbots, whereas 31% were going to implement them within the following 1.5 years (Petouhoff, 2019). NLP-based chatbots are widely available nowadays, but the degree to which they are used depends on the industry. For example, as per the study by Salesforce, in the U.S., the industries in which such chatbots are used the most frequently include media and communications, technology, and financial services (Petouhoff, 2019). Although non-AI chatbots also have benefits in terms of customer service and can solve simple tasks and answer typical questions from clients, AI applications are often preferred due to the amount of flexibility that they offer.
Chatbots with NLP-powered functions positively affect business by automating many time-consuming but rather simple tasks that customer support specialists are responsible for. For instance, businesses commonly use AI chatbots to help customers change their login details, check their account balances, or arrange appointments. This allows customer service agents to spend more time dealing with more complex issues, such as customer complaints. According to the aforementioned Salesforce survey, more than 60% of customer service specialists with AI chatbots report being able to devote most of their shift time to solving complex problems (Petouhoff, 2019, para. 19). Therefore, thanks to AI chatbots' contributions to communication with clients, businesses can significantly improve the speed with which they respond to customer requests. Almost 80% of service organizations that have implemented AI chatbots report using them to provide customer self-service and to collect information about more complex cases to facilitate agents' work (Petouhoff, 2019). Thanks to these uses of chatbots, customers do not have to wait their turn to be served or constantly repeat their details to each new agent working with them.
NLP and the Advancement of Social Causes
Apart from powering chatbots and other applications that businesses use to help prospective customers instantly, NLP techniques can be utilized in research and opinion mining. This makes it possible to use NLP to improve the understanding of social issues that exist but have not yet been widely recognized. NLP methods can support sentiment analysis in many instances, allowing researchers to explore new social issues based on conclusions derived from very large datasets.
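One of the simplest sentiment analysis techniques is lexicon-based scoring, sketched below. The word scores are invented for the example; real opinion-mining studies use curated lexicons or trained models over far larger corpora:

```python
import re

# Tiny illustrative sentiment lexicon -- invented scores, not a real resource.
LEXICON = {"good": 1, "great": 2, "helpful": 1, "bad": -1, "awful": -2, "unsafe": -2}

def sentiment(text):
    """Sum lexicon scores over a document's words; the sign gives the polarity."""
    words = re.findall(r"[a-z']+", text.lower())
    return sum(LEXICON.get(w, 0) for w in words)

posts = [
    "The new crossing is great and helpful",
    "The old bridge feels unsafe and awful",
]
print([sentiment(p) for p in posts])  # [3, -4] -- positive vs. negative polarity
```

Aggregating such scores over thousands of posts is what lets researchers surface issues that individual readers would miss.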
Additionally, to advance social causes, it is possible to program NLP-based conversational systems to provide adequate responses to inappropriate requests. For instance, sexual harassment is a social cause that can be advanced using NLP. Potentially, AI personal assistants can contribute to shaping negative attitudes toward harassment by responding to sexual innuendos and verbal abuse in a way that makes users recognize the inappropriateness of such comments. According to a study of conversational systems' responses to harassment, commercial NLP-based products, including Siri, Alexa, and Google Assistant, avoid engaging in conversations after receiving too many offensive requests (Curry & Rieser, 2018). Also, unlike other types of conversational systems, they give almost no responses that could be perceived as flirtation (Curry & Rieser, 2018). However, more training is required to make these applications fulfill their potential in promoting moral standards of behavior.
Conclusion
NLP positively affects business since it enables prominent tech companies to maintain their leadership by developing voice-activated personal assistants that become enormously popular. Also, being used in machine translation systems, NLP helps to ensure mutual understanding between eBay sellers and buyers, thus facilitating global trade. NLP-based chatbots are used by customer service specialists in different industries. They benefit businesses by optimizing the workload on customer service agents, solving the issue of customer wait times, and helping to personalize communication with clients in an effective manner.
References
Brynjolfsson, E., Hui, X., & Liu, M. (2019). Does machine translation affect international trade? Evidence from a large digital platform. Management Science, 65(12), 5449-5460.
Curry, A. C., & Rieser, V. (2018). #MeToo Alexa: How conversational systems respond to sexual harassment. In M. Alfalo, D. Hovy, M. Mitchell, & M. Strube (Eds.), Proceedings of the second ACL workshop on ethics in natural language processing (pp. 7-14). Association for Computational Linguistics.
Kalyanathaya, K. P., Akila, D., & Rajesh, P. (2019). Advances in natural language processing A survey of current research trends, development tools and industry applications. International Journal of Recent Technology and Engineering, 7(5), 199-201.
Tulshan, A. S., & Dhage, S. N. (2018). Survey on virtual assistant: Google Assistant, Siri, Cortana, Alexa. In S. M. Thampi, O. Marques, S. Krishnan, K. C. Li, D. Ciuonzo, & M. Kolekar (Eds.), International symposium on signal processing and intelligent recognition systems (pp. 190-201). Springer Singapore.