Digital immortality, as a logical step in human evolution, is one possible outcome of steadily accelerating technological advance. This idea, combined with the notion that the human race cannot evolve much further in the traditional sense, leads to the assumption that the next step would be to transcend beyond bodies of flesh to something far more complex in function, though substantially simpler in material. Savin-Baden, Burden, and Taylor (2017) define digital immortality as "the continuation of an active or passive digital presence after death" (p. 178). The concept may sound simple, but it is much harder to achieve than to define: recreating just one human brain, with all of its complexity and, most importantly, its ability to think in unconventional ways, would require a vast amount of computing power.
The problem with this concept is that it is highly likely to be impossible. Parkin (2015), in his article published on the BBC website, states that "the idea that a memory could prove so enduring that it might grant its holder immortality is a romantic notion that could only be held by a young poet, unbothered by the aches and scars of age" (para. 7). Although there is no way to be certain what to expect from the near future, many researchers agree that technological progress will continue to gain speed and scale as history unfolds. If digital immortality proves possible and achievable, humankind may solve a significant number of problems that could otherwise lead to extinction. However, if humans cease to be human in the traditional sense, will it be possible to say that the race did not become extinct? For now, one can only theorize.
References
Parkin, S. (2015). Back-up brains: The era of digital immortality. Web.
Savin-Baden, M., Burden, D., & Taylor, H. (2017). The ethics and impact of digital immortality. Knowledge Cultures, 5(2), 178-196.
Web 1.0: also called the Read-Only era, Web 1.0 consisted of static websites where users were limited to reading the information presented to them. It represented a one-way information flow, much like a school library, and lacked any interchange between consumers and producers of information. Examples include the many static websites of the DotCom boom that characterized the Internet before 1999 (Mike, 2006). Many experts call it the Hotmail and fully static website era. The webmaster was solely responsible for updating the website and providing information to users. Today's users expect more than just information, and this expectation led to the birth of Web 2.0.
Web 2.0: also known as the read-write-publish era, Web 2.0 emerged when webmasters realized that consumers of information needed more than just information, a response to the lack of interactivity in Web 1.0. Users could now read, write, publish, and edit information and share it with the rest of the world without fear of punishment, which brought about active interaction between users and webmasters. It was born in 1999, with early contributors such as LiveJournal and Blogger. With this technology, even non-technical users can easily interact and contribute to blog platforms. In Web 2.0, publishing information takes a few seconds, whereas in Web 1.0 even a minor change required effort and coordination among users, webmasters, and developers. Examples of Web 2.0 include YouTube, Facebook, Twitter, Wikipedia, and Flickr (Neil, 2008). The webmaster now shares responsibility with the Internet audience for making the Internet more informative and educational.
Web 3.0: also known as the Semantic Web, Web 3.0 provides analytical abilities and intelligent searching. It has brought about a gradual transformation of the web from an overloaded, dumb medium into an intelligent one. It is built on cloud computing, and information can be shared across any computer architecture: desktops, laptops, mobile devices, iPads, and so on. Its search engines incorporate intelligent contextual searching rather than plain keyword searching (Eduard, 2008). The Google search engine is one example of Web 3.0.
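To make the Semantic Web idea concrete, the toy Python sketch below (an illustration of the concept only, not any real Web 3.0 API; the triples and names are invented) shows the difference between matching keywords and querying machine-readable facts stored as (subject, predicate, object) triples:

```python
# Toy model of Semantic Web data: facts stored as machine-readable
# (subject, predicate, object) triples rather than free text.
triples = [
    ("Google Search", "is_a", "search engine"),
    ("search engine", "uses", "contextual searching"),
    ("contextual searching", "replaces", "keyword searching"),
]

def query(subject, predicate):
    """Answer a structured question instead of matching keywords."""
    return [obj for s, p, obj in triples if s == subject and p == predicate]

print(query("Google Search", "is_a"))   # ['search engine']
print(query("search engine", "uses"))   # ['contextual searching']
```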
The following are the benefits for businesses migrating from Web 2.0 to Web 3.0 (Mike, 2006):
Contextual Searching: queries are interpreted much as a human brain would interpret them. Information agents fill the gaps in a tailor-made search, deducing the best possible answer to a query. This saves businesses time and delivers the right information.
Tailor-made Searching: Web 3.0 will make it easier for businesses to search for information. Direct answers are provided for any query, saving businesses from wasting time on millions of meaningless results.
Personalized Searching: Web 3.0 can read and understand personal preferences. Each business has its own unique web profile based on its browsing history and can get results based on that profile and its preferences.
Evolution of the 3D Web: businesses will reap the benefits of 3D technology and the virtual world. They will also gain the advantage of cloud computing or service-oriented architecture, where a business can share applications without having to develop its own.
Interoperability: businesses will benefit from the easier customization and device independence provided by Web 3.0. Applications will be able to run on any technology architecture: computers, TVs, hand-held devices, even microwaves.
Since its inception, the Internet has greatly transformed our lives, and although it developed quite slowly at first, the number of users continued to swell over time (Weber, 2004). Presently, millions of users across the world access the Internet every day for work, leisure, or education. Initially, the Internet was not intended to be a channel for interpersonal communication; it was established by the US Department of Defense as a channel of communication for scientists located in different places. According to Windeatt, Hardisty, and Eastment (2000), the Internet is the most radical agent of change witnessed in the recent past.
Although the Internet has many benefits, it also presents serious challenges to users and organizations. Its usage has spread throughout the world, and it is being used for good as well as bad purposes. Even though no single organization controls the Internet, various organizations exist to set standards that should be followed by all users and service providers.
During its growth, the Internet had to undergo radical technological changes to cater for the increasing number of users. Fundamentally, the Internet is a set of diverse networks that interconnect with each other on a mutual basis. The networks that make up the Internet are linked using devices such as routers, which forward packets from one node to another as they travel to their intended destination. This paper provides a discussion of the evolution of the Internet.
Evolution of the Internet
The Internet started in the early 1960s, when the Cold War was at its peak. To support its research projects in different locations, the Advanced Research Projects Agency (ARPA) opted to create a huge computer network for sharing data and programs (Pastor-Satorras & Vespignani, 2007). The creation of the network was meant to be a security measure to ensure that data and information belonging to ARPA remained within the system. The Advanced Research Projects Agency Network (ARPANET) was later initiated by Lawrence Roberts, a research professional at the Massachusetts Institute of Technology (MIT). At first, the network was intended to link mainframe computers located at four different universities in the United States. It later expanded to include other public and private institutions.
Packet switching technology was pioneered on ARPANET and opened the way for the development of the Internet. According to Abbate (2000), packet switching was one of ARPANET's most widely celebrated inventions and facilitated the movement of data packets across computer networks. Although it was fast, well-organized, and reliable, its implementation was quite complex; consequently, some experts were concerned about its practicality. As explained earlier, the Internet was originally designed to support the research work of the United States Department of Defense. The idea was then adopted by learning institutions before finally being embraced by the business world. With time, the number of nodes attached to ARPANET increased, and soon it was necessary to use more advanced technologies.
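As a rough illustration of the packet-switching principle described above (a toy sketch, not ARPANET's actual protocol), the Python fragment below splits a message into numbered packets, shuffles them to simulate independent routes, and reassembles them at the destination:

```python
import random

def to_packets(message, size=4):
    """Split a message into (sequence number, payload) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Rebuild the original message regardless of arrival order."""
    return "".join(payload for _, payload in sorted(packets))

packets = to_packets("Packets may travel independently")
random.shuffle(packets)          # simulate packets taking different routes
assert reassemble(packets) == "Packets may travel independently"
```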
Rapid Growth
The example set by ARPANET inspired other public institutions, such as the United States Department of Energy, to follow suit. The adoption of the Internet by public organizations was later imitated by private institutions. As the number of connections increased, the need for advanced technology became apparent. Consequently, designers had to focus on technologies that could reliably support the increasing number of users and organizations.
Although the Internet has grown tremendously as a result of cooperation between different players in the technological sector, Tselentis (2009) argues that its evolution may reach a standstill if technology evolves without catering for the needs of every individual in society. Rather than focusing on their own needs alone, organizations should pursue innovations that are affordable and all-inclusive.
The Transmission Control Protocol/Internet Protocol (TCP/IP) Development
The development of many other networks further stimulated the growth of the Internet. This was later reinforced by the development of the TCP/IP protocol. Generally, all computers that are connected to the Internet use TCP/IP to communicate. The Internet TCP/IP protocol suite consists of several protocols, with TCP and IP being the most important ones (Tkacz & Kapczynski, 2009).
While TCP is mainly concerned with the reliable delivery of transmitted data packets, IP provides the unique identification of the various nodes present on the Internet. Whenever a host initiates a transmission, it attaches important details to the data packets being transmitted to ensure successful delivery. One of the strengths of TCP is its reliability: acknowledgment messages are returned to the sender on successful delivery, while negative acknowledgments are received if the data fails to reach the intended recipient. The User Datagram Protocol (UDP) may also be used, but unlike TCP, it is unreliable, and the delivery of packets is not guaranteed. With UDP, no mechanism exists to let the sender know the status of the transmission. It is, however, faster than TCP and is normally used for streaming media services, where reliability may not be a major concern.
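The contrast between the two transports can be sketched with Python's standard socket module. This is a minimal sketch, not a full network application; the addresses and destination host are illustrative, and the TCP half needs network access to run:

```python
import socket

# UDP: connectionless and fire-and-forget. No acknowledgment comes back,
# so the sender cannot tell whether the datagram ever arrived.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"status update", ("127.0.0.1", 9999))
udp.close()

# TCP: connection-oriented. connect() performs a handshake, and lost
# segments are retransmitted until they are acknowledged.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    tcp.connect(("example.com", 80))
    tcp.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    print(tcp.recv(200))         # a reliable, ordered byte stream
finally:
    tcp.close()
```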
Evolution of the World Wide Web
A clear distinction exists between the Internet and the World Wide Web. While the Internet is the infrastructure that supports operations, the World Wide Web is the collection of web pages that reside on different servers across the world and are accessible through the Internet. This is analogous to vehicles using roads to reach their destinations: without the road infrastructure, vehicles would not be able to move and would thus be useless. Similarly, the World Wide Web is only useful if a reliable Internet infrastructure is in place.
The World Wide Web is an invention of Tim Berners-Lee and utilizes a global hypertext system that relies on the Internet to move information from one point to another. The Hypertext Transfer Protocol (HTTP) facilitates the transmission of web documents from a server machine to the client machine that initiates a request. According to Tkacz and Kapczynski (2009), the development and application of the World Wide Web made the Internet more available to users. To a large extent, the World Wide Web is responsible for the increased number of Internet users. For a long time, the World Wide Web was mostly text-based. Marc Andreessen later improved the work done by Tim Berners-Lee, making it possible for web documents to support both text and non-text content. The development of Web 2.0 is expected to open the way for Web 3.0, which is meant to revolutionize the Internet further as we move into the next generation. Among other things, Web 3.0 will enable developers to transform the Web into a database that permits easy access to data.
Principally, the Internet uses a client/server model, with servers existing in different places. A server may refer to hardware or software and is usually configured to provide services to client machines. A client, in turn, is either hardware or software that accesses services from the server. Ordinarily, client machines are used by users to access resources stored on different servers. To communicate, both the client and the server must use the same protocol, or communication standard.
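A minimal sketch of this client/server interaction, using only Python's standard library (the port and page content are invented for illustration): the server waits for requests, and the client retrieves a web document over HTTP, the protocol both sides share.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class PageHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Server side: answer every request with a small web document.
        body = b"<html><body>Hello from the server</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

server = HTTPServer(("localhost", 8080), PageHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: speak the same protocol (HTTP) to request the page.
with urllib.request.urlopen("http://localhost:8080/") as response:
    print(response.read().decode())

server.shutdown()
```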
Conclusion
Unlike any other invention, the Internet has transformed our lives. It has simplified and greatly changed the way human beings interact. As discussed in this paper, the Internet has positive as well as negative effects. While it creates opportunities for growth and development, it presents serious challenges that must be dealt with. Although people can easily store and disseminate information using the Internet infrastructure, security and privacy are key concerns. Because of security lapses, for example, many organizations have lost important trade secrets that were leaked to competitors.
Undoubtedly, the Internet affects every part of our lives, and its influence will continue to permeate every corner of our society. Regardless of the negative effects, the Internet has simplified life. The world has become a global village, and reaching others is no longer a challenge. In the same way, businesses are no longer restricted by boundaries that existed in the olden days, and consumers have a wide variety of products to choose from when making a purchase.
Although the Internet has undergone so many changes over the years, it is obvious that continuous technological changes and the desire by humans to further simplify operations will define the future of the Internet. As noted earlier, the move toward the use of more sophisticated technologies is inevitable.
References
Abbate, J. (2000). Inventing the Internet. Cambridge, MA: MIT Press.
Pastor-Satorras, R., & Vespignani, A. (2007). Evolution and Structure of the Internet: A Statistical Physics Approach. New York, NY: Cambridge University Press.
Tkacz, E., & Kapczynski, A. (2009). Internet Technical Development and Applications. Berlin: Springer.
Tselentis, G. (2009). Towards the Future Internet: A European Research Perspective. Amsterdam: IOS Press.
Weber, S. (2004). The Internet. New York, NY: Chelsea House Publishers.
Windeatt, S., Hardisty, D., & Eastment, D. (2000). The Internet. New York, NY: Oxford University Press.
Prior to Facebook, Twitter, MySpace, hi5, Whive, and all other social networking sites, GeoCities dominated the market. The Internet seemed to revolve around GeoCities, and its impact was felt around the globe. The origin of social networking can therefore be traced to GeoCities, a company that existed in the 1990s. This paper seeks to provide a genealogy of social networking. To achieve this, we will discuss how GeoCities and its users perceived the World Wide Web, as well as the continuities and discontinuities between GeoCities, earlier media forms, and current ones.
History of GeoCities
It is hard to speak about the evolution of social networking without briefly looking into the history of GeoCities. GeoCities was a free web hosting company created in 1994. The company allowed consumers to create their own web pages by choosing the city in which they preferred their web page to be classified, which let users personalize their pages. In 1997, GeoCities began exerting control over these web pages by introducing advertisements, and during this time its popularity increased. To promote awareness, GeoCities added watermarks to its web pages, a move that made users feel the company interfered with the look of their pages.
Early in 1999, Yahoo acquired GeoCities and changed all URLs to Yahoo addresses. Yahoo also created its own logo for GeoCities. To discourage free web hosting, Yahoo introduced fees for premium hosting. The company also curtailed free web hosting by limiting the rate of data transfer to visitors of such web pages. Yahoo later linked GeoCities pre-paid accounts to its Yahoo Web Hosting service, which provided high data transfer rates to users.
GeoCities and the World Wide Web (1996-2003)
GeoCities imagined the World Wide Web as a medium that connected people who were far apart. The introduction of free web hosting services at its onset indicated that the company promoted the creation of an online community. GeoCities did not impose fees on users, and users had the freedom to personalize web pages as they desired. GeoCities created a social platform where people from different parts of the world could communicate at no or low cost, making communication not only affordable but easily accessible.
As time went on, GeoCities began controlling its web pages by introducing advertisements on them. Although users did not like the concept, GeoCities aimed at making profits through advertising. GeoCities then placed a watermarked logo on the company's web pages to increase its brand awareness.
GeoCities gradually changed when its ownership passed to Yahoo. Yahoo redefined the World Wide Web from a communal space into one where belonging to the community had to be paid for. After purchasing GeoCities, Yahoo slowly eliminated free and low-fee web hosting services by reducing data transfer rates. The company also introduced fast pre-paid web hosting services with high data transfer speeds. The personalization of web hosting had now taken a new shape, and those who did not pay were partly disconnected from the world. Later on, Yahoo shifted users who had paid for web hosting to its own web hosting services. This move meant that consumers had to pay to access web pages. Rather than uniting the community, Yahoo created distance in communication by introducing paid web hosting services.
Users View on the World Wide Web (1996-2003)
Between 1996 and 2003, people viewed the World Wide Web as a global community. The World Wide Web mainly connected people living within cities. In short, the World Wide Web created a global community that provided a platform for people to share their ideologies and value systems despite the distances between them.
The World Wide Web encouraged the socialization of people and enhanced their communication. It acted as a mode of communication through which people could share their feelings and upload images. Users imagined the World Wide Web as something that created communities around them. As a matter of fact, it created a digital community where people would not only be friends but parts of one community. This was done through GeoCities, a social networking web hosting company that provided a platform for people to sign up in different cities. When people joined a city, they met strangers with common interests. A neighborhood ideology was built through the creation of digital communities among residents of the same geographical locations.
An example is the GeoCities web archive Possum Jenkins Live at Pleasant Ridge House Concerts (2013). The owner created the web page and personalized all of its contents. The web page was an exemplary audio archive that stored individual audio files; one listened to them by simply clicking icons. The page further provided visitors with options to review, rate, and comment on the page. Visitors were also able to download audio files.
People in the same digital communities were able to read the same posts and sign the same guest books. Users were not restricted from accessing information about their cities; when one joined a city, they became part and parcel of it. The World Wide Web was influential in connecting people who shared common communal values, creating a platform where people with the same interests and core values could communicate.
Personalizing web pages was also a common phenomenon. People employed layout, graphics, and color on their customized web pages to communicate with others in their cities (Gauntlett, 2000). An example of a personalized GeoCities page is the animated boxing cartoon Boxing tonight (2010). The owner created a video clip of an animated cartoon on boxing, uploaded it to their own webpage, and provided a download option. Through GeoCities, the World Wide Web enhanced communication between people.
GeoCities and Older Media Forms
Older media forms included AOL, a web hosting company that preceded the World Wide Web. AOL introduced online services in the 1980s. It used proprietary software and a graphical interface to emphasize communication among its members. In its contribution to the online world, AOL started by offering online games through its PlayNet software.
Tripod was a web hosting company that offered free and paid services in the early 1990s. The company also had a blogging tool and a site builder for page editing. Users who paid for the services accessed domain names, additional disk space, email, and the web. There was continuity between older media forms and GeoCities: both AOL and Tripod provided a platform for users to relate to each other. Tripod, just like GeoCities, created a digital community by providing its users with free web hosting services. It also offered blogging and interactive services for users, a feature that continued in GeoCities. GeoCities likewise rewarded those who paid for services with better and unique web hosting offerings. A discontinuity, however, was the abandonment in the GeoCities period of proprietary software systems for providing online services.
GeoCities and the Current Web Culture
Both GeoCities and the current web culture provide a platform that allows people to design the look of their web pages and share information. The current web culture continues to display characteristics similar to GeoCities, creating continuity in the transformation of social media. GeoCities offered users the freedom to upload content to their web pages and create animations. Likewise, the new generation of social networking sites has options that allow users to create animations and upload and share pictures with the online community. Both create room for exchanging information and ideas among people.
However, the development of social networking has created discontinuities between GeoCities and the current web culture. Unlike GeoCities, the current web culture censors information, including posted video and visual elements; social networking sites monitor and censor information sent through their web pages. The current web culture has also advanced its tools to accommodate users' needs, whereas GeoCities users relied on customized tools to effect changes in their web pages. For example, Boxing tonight (2010), a GeoCities webpage, was customized to fit the demands of its creator: the owner uploaded as many audio and video files as satisfied them and shared the files with friends. The files could also be downloaded.
The current web culture is stricter about the content being shared among users. In short, communication is censored to fit the web hosting company's ethical and social standards. Facebook, a recent social networking site, does not give complete freedom to its users. Content posted on its web pages is censored, and in some cases, if an attachment or a file does not satisfy the company's requirements, it is removed from public view. We encounter messages like "the attachment is no longer available; it has been flagged as abusive and offensive." This indicates that communication between people is monitored through various channels, a practice that GeoCities did not follow.
The current web culture also determines relationships among people. A connection between two people has to be accepted by the other party after a friend request is sent. Facebook, for example, permits its users to block or add friends; it is possible for someone to add a friend and still block them from accessing particular information on their pages. This is a new development that discontinues the GeoCities culture of allowing all people of a city to share and access information without limitations.
The development of social networking from GeoCities to the current web pages has also seen a tremendous increase in what users pay. Today, users of any social networking site pay for Internet access, and uploading and downloading audio and visual files is more expensive still. Communication between people has also been diversified by the introduction of numerous social networking websites that serve the same purpose.
Conclusion
Social networking through the Internet dates back to the introduction of AOL and online games. Over time, the social networking culture has developed, and global communities have been created. GeoCities made tremendous changes to the Internet and created digital communities.
This paper explores the rise of robots as explained in the BBC documentary Hyper Evolution: Rise of the Robots. The narrators, Dr. Ben Garrod and Prof. Danielle George, share their experience of robots that have been created in advanced forms (Verma 00:57:53). Human beings have been applying them in all sectors across the world. In the video, the robots look like real human beings, and they have been given the capacity to act in a human way through machine learning powered by artificial intelligence. The rise of robotic technology has led to approximately 9 million robots today (Verma 00:02:43). Despite easing people's lives, robots seem to pose a threat to human life, which is a matter of contention. One may therefore wonder why the usage of robots keeps increasing despite these underlying concerns.
Hyper Evolution
Hyper evolution is the rapid application of advanced robots, whereby human beings have created machines that achieve what human beings can do. Hyper evolution is associated with a recreation of human power, in that some activities threaten the survival of human beings. Robots are important for human survival because they perform tasks that humans may not manage, or that would make sustaining life harder if humans had to do them. The narrators argue that robots can take on sophisticated tasks that human hands could not process at a go (Verma 00:32:15). This means robots are essential for replacing human labor where a mass workforce is required. Robots can follow their own paths, which maximizes a potential that an unaided human being does not have. For example, the narrators show a scenario in which robots lift car bodies, which expands their argument that machine learning technology is required for human beings' survival.
Robotic technology helps humans be productive and maintain sufficient items fit for human survival. One reason is that robots are made like human beings, with pelvic structures, meaning that they can walk like people for durations that would be tiresome for a normal body (Verma 00:42:12). Technology controls make it possible to leverage units important for sustaining human life, given the ever-increasing discovery of techniques that facilitate massive performance in all sectors. Therefore, hyper evolution is a factor that will help human beings survive despite the risks involved. This enhanced relationship shall provide the basic collaboration needed to run all aspects of human life, including survival.
Important Segment
While watching the video, one encounters sensitive content regarding the human workforce and massive robotic power. The documentarists argue that by 2030, robots will have taken over 30% of human labor (Verma 00:37:22). That trend is significantly worrying, since many people will be unemployed, leading to unbalanced economic and social ties. The segment is important because technology commanding almost all sectors threatens human development in terms of production capacity, a subject that can adversely change the world. On that note, programming machines to act on behalf of human beings leads to redundancy of mind, and people will be glued to a computer for something that could be processed using natural human intelligence.
The machine learning concept is indeed helpful to society, but it comes with costs and risks that can be mitigated by directly controlling what the machines are allowed to do. A challenge arises if robots become equal to humans in their capacity to act. For instance, the medical field, which has witnessed a wide array of technological transformations, may suffer a negative impact on the quality of care when evidence-based clinical judgment is delegated to machines. Therefore, the segment is critical, since it shows limitations that may affect the well-being of individuals worldwide.
Reasons Behind the Rise of Advanced Robotics
There are various reasons for the rise of robot technology. According to Verma (00:11:03), the creation of robots has been rampant because developers believe it enhances society. Robots such as Erica, created by Professor Ishiguro, appear human-friendly and are helpful to human beings (Verma 00:11:41). The robotic transformation has therefore risen from the need to build positive and purposeful relationships between machines and people. That is why most robots are created with human features, such as faces resembling human ones, so that people will stop fearing the uncanny nature that machine learning may bring along.
Conclusion
Beyond the documentary, the rise of robots has been facilitated by other factors. For instance, the need for increased efficiency has led to the discovery of more commands that can complete repetitive tasks that would take human beings a long time (de Vries et al. 18). Advancements in technology have also facilitated the rise of robots, with countries such as France and Japan actively building their artificial intelligence capabilities and thus finding robotic power a way of achieving their visions. From the above discussion, robotic technology advances human life and, at the same time, threatens it, depending on the area of focus (de Vries et al. 14). In terms of job opportunities, for example, many people may miss being hired due to machine substitutes. On the other hand, mass production enables a sufficient flow of products that are part of what drives human life.
In contemporary occidental countries, where human rights have made comparatively notable progress, accessibility is one of the crucial criteria by which technologies should be judged. The disability rights movement and the growing popularity of digital devices and visual interfaces prompted the creation of assistive technologies that provide access to information for excluded groups of the population. The main goal of this paper is to create a comprehensive picture of screen readers' evolution and their current usability. The study results constitute a linear account of the instruments, technologies, and events that drove the advance of assistive technologies, and screen readers specifically, outlining their history. The principal method used in the study is outcome-based information evaluation, which helped to construct the account. The research led to conclusions apropos of the current state of assistive technologies and to recommendations that can be followed to enhance accessibility.
The History of Screen Readers
Low vision is a condition that may interfere with even trivial daily activities in vexing ways. Limited peripheral and central vision, unfocused vision, tunnel vision, and heightened sensitivity to light affect how people interact with their environment on a day-to-day basis. Visual impairments are a growing health problem, with millions of United States citizens affected; moreover, given the lifestyle that the majority of people currently adhere to, the number of those affected will likely grow. One of the challenges that low vision may pose is the usage of new technologies, computing devices in particular.
Several inventors and researchers contributed to progress in the domain. Among them are Jim Thatcher, the creator of the first screen reader (Ademi & Ademi, 2018); Alistair Edwards, who advanced the software with Soundtrack, one of the pioneering word processors with an auditory interface; and Ted Henter, a programmer who founded Freedom Scientific, a company that revolves around assistive technologies, and created JAWS (Lazar et al., 2007). Despite the advancements, several gaps are still present in the research on assistive technologies, such as a lack of coordination between medical professionals and developers of assistive devices. This gap is evident in the scarcity of literature devoted to the topic in recent years. This paper aims to synthesize the history of the screen readers that contributed significantly to the ever-evolving domain of assistive technology.
Materials and Methods
Considering the specifics of the research paper, its focus on the history of screen readers, and assistive technology's evolution, outcome-based information evaluation seems a suitable method for the research. It serves to estimate the degree to which historical advances, in combination with technological ones, resulted in the creation of screen readers and their current state of development. An array of research papers and scholarly articles from different periods is the principal material basis for the paper, as it allows for the construction of a historical account based on evidence from each temporal stage.
Results
TTS and Speech Synthesis as the Precursors of Screen Readers
With the rapid growth and expansion of digital technologies over the second half of the last century, computing devices have reached almost all spheres of human existence. Access to digital information may even be one of the primary criteria of life quality and a right of every citizen. Nevertheless, modern technologies, as the average user knows them, are unable to satisfy the needs of a large number of society's members, even in developed countries (Evans & Blenkhorn, 2003). The struggle for accessibility and the origins of screen readers trace back to the pre-PC era, to the elaboration of Text-to-Speech (TTS) technologies, which were actively pursued even before the Second World War (Ademi & Ademi, 2018). Nowadays, screen readers have become one of the most utilized subtypes of assistive technologies.
Text-to-speech rendering is one of the primary methods that set in motion the research that finally made screen readers possible. Although screen readers differ significantly from TTS, the principles of speech synthesis act as a cornerstone of the history of the technology in question (Edwards, 1989). Ademi and Ademi (2018) state that the TTS process is based on the artificial production of speech, whose first attempts are documented in the eighteenth century. Nevertheless, the focus on using TTS in screen reading technologies to enhance accessibility formed during the last century.
Speech synthesis is another technology that provided a foundation for the creation of screen readers. Christian Kratzenstein produced the first mechanical speech synthesizer in 1779; the machine's performance was limited to five long vowels (Nguyen et al., 2018, p. 349). This result was significantly improved over time in other models. The introduction of the Voice Operating Demonstrator (VODER) in 1937 by the Bell Telephone Laboratory, a manual electronic machine created by Homer Dudley, was the next step toward screen readers (Nguyen et al., 2018). It should be noted that at that stage, the voice quality and the level of intelligibility were rather low, and the usage of the instrument was overcomplicated; it even had a pedal to modulate the speech (Nguyen et al., 2018). These two devices, and those in between, demonstrated the ability of technology to generate a semblance of a voice and inspired the future of accessibility and assistive technologies.
The history of speech synthesizing devices is not limited to the borders of the United States, although a significant part of the inventions was made there. Nguyen et al. (2018) note that apart from synthesizing English, an array of different, predominantly European, languages was involved in the process. For instance, software that could synthesize Italian, named MUSA, was introduced in 1975 (Nguyen et al., 2018, p. 350). The interest in other languages and the progress made abroad pushed the Bell Telephone Laboratory to produce the first multilingual TTS mechanism in 1997, based on its research on multilingual synthesis (Nguyen et al., 2018, p. 351). In this way, the history of screen readers and the advances in the area extend over several countries, incorporating research and technologies directed at various languages.
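For a sense of what TTS looks like today, the sketch below uses pyttsx3, a third-party Python library that wraps the speech engines shipped with modern operating systems. It is an illustration of present-day TTS, not of the historical systems discussed above; the spoken text and rate value are arbitrary:

```python
import pyttsx3  # third-party wrapper around the operating system's TTS engine

engine = pyttsx3.init()          # pick the platform's speech backend
engine.setProperty("rate", 150)  # words per minute, chosen arbitrarily
engine.say("Text to speech turns written words into audible speech.")
engine.runAndWait()              # block until the utterance finishes
```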
The Creation of First Screen Readers
The mechanisms discussed above serve as the basis for the first screen readers, which are founded on the principle of transforming text into speech. Ademi and Ademi (2018) describe their usage in this way: "using keys combinations, the user can move through the user interface and read all the texts available on the screen. The user can use the keypad to enter the text that is transformed into speech by the screen reader and is read aloud" (p. 1334). Jim Thatcher is the inventor of IBM Screen Reader (1986), which is considered to be the first of its kind and was designed for the DOS operating system (Ademi & Ademi, 2018). It was followed by IBM Screen Reader 2, which, unlike the first IBM screen reader, had a graphic interface (Mynatt, 1997). The two applications, despite their limitations, gained particular popularity inside the IBM company.
Initially, the software was designed for personnel with vision impediments at IBM and was not primarily commercial. Users could control IBM Screen Reader 2 with numeric keys, a fact that made it the first of its kind (Ademi & Ademi, 2018). IBM Screen Reader and IBM Screen Reader 2 became the pioneers in the area of assistive technologies focused on rendering textual content as speech. Their expansion was further prompted by the growth of activism, primarily concentrated on the right to access information (Scotch, 1989). The two screen readers were a vast improvement over the speech synthesizers produced by the Bell Telephone Laboratory in matters of speech quality and intelligibility.
By the end of the eighties, a variety of technologies designed to help visually impaired users had emerged. Frank Audiodata produced screen readers that used a modified keyboard to determine the part of the text to be spoken, with a pair of sliders, one moving horizontally and the other vertically (Edwards, 1989). Vert is an example of a screen reader adaptation that was obtainable in several versions, which differed in quality of performance and price (Edwards, 1989). Soundtrack, developed by Edwards (1989), represents an attempt to build a word processor with an auditory interface, and it had two versions. According to its creator, the initial evaluation of the product revealed that it was difficult to navigate, since users had trouble recalling the arrangement of the internal elements in the windows (Edwards, 1989). Nevertheless, Soundtrack was declared usable, and Soundtrack 2 came with several improvements (Edwards, 1989). The screen readers produced by Frank Audiodata, Vert, and Soundtrack facilitated the lives of visually impaired users of the late twentieth century and laid the ground for their more advanced counterparts.
Societal Context of the Era
The disability rights movement generated suitable conditions for the expansion and commercialization of assistive software and applications. The creation of the first screen reader in the mid-eighties followed the passage of the 1973 Rehabilitation Act (Scotch, 1989). The act was meant to protect disabled groups from discrimination encountered in programs and employment practices provided by the federal government or with its assistance. Section 508 centered on the right to access governmental sites and their usability for people with impairments; the section established essential guidelines for accessibility (Olalere & Lazar, 2011). Olalere and Lazar (2011) emphasize that in U.S. law, the notion of accessibility is still defined by Section 508. Even though it does not concern private organizations, Section 508 motivates companies interested in cooperating with the federal government to enhance their web accessibility.
Recent Developments
Section 508 had a noticeable impact on the sphere of digital accessibility; it served as an incentive for the market to grow and encouraged attempts to improve screen readers. For instance, BrookesTalk is a browser instrument that structures the content on a page and thus creates a coherent digital environment for visually impaired users (Lazar et al., 2007). JAWS is a screen reader designed by Freedom Scientific, a company that centers on digital accessibility products (Lazar et al., 2007). The instrument allows its users to access a page in a non-sequential way by incorporating commands that enumerate available frames (Leporini & Paterno, 2004). VoiceOver has recently become another major player in the assistive technology market, following JAWS, while NVDA is distinguished from the other screen readers listed by being free. Window-Eyes is another prominent screen reader, developed by GW Micro; both Window-Eyes and JAWS run on Microsoft Windows and attract the majority of visually impaired users (Lazar et al., 2007). In this way, legislation concerning digital accessibility seems to be one of the most salient stimuli for the wider variety in the screen reader market today.
Discussion
The assistive technology uplift that happened in the eighties is a result of years of activism and policy enforcement. IBM improved screen readers, made them ready for broader usage, and shifted toward their commercial manufacturing. Despite the accomplished progress, digital accessibility is a sphere that needs extra effort and attention from Web developers. For instance, Section 508 compliance remains a problem decades later. The research conducted by Olalere and Lazar (2011) shows that, on average, 2.27 accessibility guidelines are violated per home page of a federal website (p. 307). Web accessibility is even equated with the notion of usability where people with different disabilities are concerned; it embodies the idea of inclusion and equal opportunities (Freire & Paiva, 2007). The 2009 memorandum on Transparency and Open Government requires openness and the availability of governmental information to all citizens (Olalere & Lazar, 2011). Nevertheless, it is still violated through disregard for the guidelines provided by Section 508.
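The kind of guideline violation measured by Olalere and Lazar can be detected automatically. The sketch below is a minimal, hypothetical example (not an official Section 508 tool) that flags one common failure: images without the alt text a screen reader would announce.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Count <img> tags that lack the alt text screen readers rely on."""
    def __init__(self):
        super().__init__()
        self.violations = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img" and not dict(attrs).get("alt"):
            self.violations += 1

checker = AltTextChecker()
checker.feed('<img src="logo.png"><img src="chart.png" alt="Sales chart">')
print(f"Images missing alt text: {checker.violations}")  # 1
```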
Recommendations
Compliance with the Rehabilitation Act of 1973 is one of the major steps that could help promote digital accessibility. Requiring each federal site to have an accessibility statement is one way to promote it (Olalere & Lazar, 2011). Bringing the issue into public discourse, for instance by dedicating research to digital technologies that promote accessibility, is another method of drawing attention to the problems that need to be solved. Universal usability is still far from being reached, as it takes visually impaired users longer to accomplish tasks on the Web (Lazar et al., 2007). This signals a lack of understanding among Web developers of the problems that a part of their users encounter daily. Web accessibility should be incorporated into general Web education, a process in which screen readers could play a role by providing users who are not visually impaired with an experience that others face daily.
References
Ademi, L., & Ademi, V. (2018). Visually impaired students' education through intelligent technologies. Knowledge International Journal, 28(3), 1133–1138.
Edwards, A. (1989). Soundtrack: An auditory interface for blind users. Human–Computer Interaction, 4(1), 45–66.
Evans, G., & Blenkhorn, P. (2003). Architectures of assistive software applications for Windows-based computers. Journal of Network and Computer Applications, 26(2), 213–228.
Freire, A., & Paiva, D. (2007). Using screen readers to reinforce web accessibility education. ACM SIGCSE Bulletin, 39, 82–86.
Lazar, J., Allen, A., Kleinman, J., & Malarkey, C. (2007). What frustrates screen reader users on the web: A study of 100 blind users. International Journal of Human-Computer Interaction, 22(3), 247–269.
Leporini, B., & Paterno, F. (2004). Increasing usability when interacting through screen readers. Universal Access in the Information Society, 3(1), 57–70.
Mynatt, E. D. (1997). Transforming graphical interfaces into auditory interfaces for blind users. Human–Computer Interaction, 12, 7–45.
Nguyen, T. V., Nguyen, B. Q., Phan, K. H., & Do, H. (2018). Development of Vietnamese speech synthesis system using deep neural networks. Journal of Computer Science and Cybernetics, 34(4), 349–363.
Olalere, A., & Lazar, J. (2011). Accessibility of U.S. federal government home pages: Section 508 compliance and site accessibility statements. Government Information Quarterly, 28(3), 303–309.
Mobile communication has changed the way people communicate, a change that can be attributed to advancements in the technology field. Communication has thus been joined to mobility, and people find it easier to interact without necessarily moving over long distances.
There has been increased growth within the wireless industry, both in terms of subscribers and mobile technology (Lemstra, Hayes and Groenewegen 47). The use of fixed lines has declined over the years; in its place is an increasing number of mobile subscribers.
By 2010, mobile cellular users outnumbered fixed-line users four to one. The development of cheaper mobile phones also facilitated this growth, and more people now have access to mobile phones due to their ease of use and maintenance.
Background
Mobile phone communications over cellular networks work through the use of packet data, a wireless data transmission technology that sends digital radio signals through wireless packet switching. The cell phone itself can be considered a simple radio transmitter and receiver, equipped with an omnidirectional antenna.
The cell phone transmits radio signals to a nearby cell tower. The distance between the cell phone and the cell tower can vary greatly depending on the technology used (Steinbock 22); with strong wireless technologies, it can reach up to 250 miles. A service box is attached to the cell tower, and together the cell tower and the service box comprise a base station.
The base station is then connected to a switching center via cables. The switching center establishes a connection between one cell phone and the called number.
Developments in communication networks resulted from the realization that networks should be efficient, and that their designs should encourage efficiency in the networks' functionality. This led to network design and optimization initiatives by various telecommunications stakeholders, including network operators, device vendors, wireless network experts, and governments.
These stakeholders frequently meet and discuss specifications to guide each new generation of wireless communication networks. Once a specification has been agreed upon, the various parties come together and implement the wireless network according to the established specification.
Technological progression
Few technologies have advanced and evolved in as short a time as wireless technologies. The latest wireless technology is the fourth generation (4G). Wireless technologies have followed a trend in which efficiency and performance within the mobile environment were the main goals. The first generation (1G) achieved basic mobile voice.
Through this, mobile devices could communicate with each other over long distances. It was known as FM technology and resulted in mobile radios that could access radio signals within a wide range. The network was in use during the 1980s (ABI Research 29).
The second generation (2G) then evolved, characterized by better coverage and capacity. It solved the problems witnessed in 1G networks. The 2G networks were considered digital systems.
Thus, services such as short messaging and data became available. Global System for Mobile Communications (GSM) and CDMA2000 1xRTT are the basic 2G technologies.
CDMA2000 1xRTT is sometimes referred to as a 3G technology because it meets the minimum speeds expected of a 3G network. EDGE, a method of data access over the cellular network, is likewise sometimes considered 3G even though it is formally 2G. The 2G networks became available in the 1990s (ABI Research 31).
Thereafter, the third generation (3G) network arose, seeking to transmit data at higher speeds so that mobile networks could be as fast as fixed broadband. Various specifications were established by the ITU to detail the characteristics of 3G networks.
This network was expected to provide speeds of up to 2 Mbps indoors, with throughput of 384 kbps at pedestrian speeds and 144 kbps at vehicular speeds. 3G technologies comprise CDMA2000 EV-DO, UMTS-HSPA and WiMAX (Sauter 111). The agreements on the requirements of a 3G network were established by a project called International Mobile Telephone 2000 (IMT-2000).
The fourth generation (4G) was later developed, bringing better advancements within the telecommunications field. This network caters to multiuser environments through the introduction of advanced mobile services. It also supports both fixed and mobile networks. Moreover, the 4G network can handle a varied range of data rates and applications with high mobility.
Adaptation of 4G networks
3G upgrades led to the development of LTE (Long Term Evolution) and WiMAX networks, and the cellular industry has adapted readily to these new standards. Even in places where the technology is functional, older 2G and 3G networks are still used, because many people do not have mobile phones that can access 4G networks; they are forced to depend on the older versions as subscription grows.
Thus, LTE is not yet universal. It is still a new technology, so most cell phones with 4G capabilities are also backward compatible with older networks like 3G and 2G (Saboji and Akki 80). The principles established by IMT-Advanced guide the characteristics and deployment of 4G networks.
4G networks are expected to operate with very high spectral efficiency. Moreover, 4G networks should be operable in radio channels of 40 MHz and wider, reaching up to 100 MHz. Many countries are expected to roll out 4G networks by 2020.
4G networks have a peak throughput of 1.5 Gbps, attributable to a peak spectral efficiency of 15 bps/Hz. 4G arose from the realization that 3G networks might become overwhelmed by applications requiring intensive bandwidth.
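The peak figure is consistent with the channel widths quoted above, since throughput is spectral efficiency multiplied by bandwidth:

\[
R_{\text{peak}} = \eta_{\text{peak}} \times B = 15\ \text{bps/Hz} \times 100\ \text{MHz} = 1.5 \times 10^{9}\ \text{bps} = 1.5\ \text{Gbps}
\]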
4G networks have speeds reaching ten times those provided by 3G. The first commercially available 4G standards were WiMAX, offered in the U.S., and LTE (Lemstra, Hayes and Groenewegen 56).
The development of new technologies within the communication industry was triggered by increasing user demand. In comparison to 3G, the technological advancements arising from 4G were meant to combine all previous mobile technologies. 4G thus represents a technological progression that advances and improves on all previous mobile networks.
These include GSM, General Packet Radio Service (GPRS), Wi-Fi, Bluetooth and IMT-2000. 4G has therefore been referred to as "all-IP," as it combines all the technologies that preceded it. This is done to ensure harmonization of all services provided.
In comparison to other networks, the functionality of 4G networks is distributed among a set of gateways and servers (Agbinya 34). The network is thus cheaper to deploy and operate.
Social progression
People depend on communication networks for the consumption of both video and television streaming services. 4G networks have made these services faster, which has had a significant impact on social interactions. It has also become easier to make video calls thanks to the speed and low latency of 4G networks.
4G networks have led to performance improvements in various multimedia applications. For instance, they have facilitated online gaming due to the increased speeds. Moreover, more people are able to stream videos online (Steinbock 36).
Online video games have been significantly impacted by 4G networks. More role-playing and social games are being developed by gaming companies to cater to hardcore online gamers. This has also facilitated connectivity, whereby more people can easily play and download games. It is now easier and cheaper to purchase games online instead of buying hard copies. Moreover, real-time interactions through games have become easier.
This is especially important for multiplayer games, where real-time interaction is essential to enjoying the game. The online entertainment industry has also grown due to the introduction of 4G networks. With the increased speeds, high-quality videos can be uploaded and watched online. 4G has thus facilitated the online sharing of videos, as it takes less time to download a high-quality video.
HD movies have also become the norm, and this has led to the continuing growth of the online entertainment industry. Many companies now distribute movies online, as 4G ensures that consumers will enjoy high-quality movies without loss of clarity (OECD 14).
Video communication has also been impacted. Many business organizations and individuals regard video communication highly, as it makes communication over long distances realistic. 4G networks have provided a cheaper alternative that is faster and more effective, and they have proved to be socially advantageous.
The merger of the various technologies that comprise 4G has enabled cheaper or free roaming (Agbinya 38). A subscriber can use their home country's 4G network from wherever they are, because 4G networks are expected to have global coverage. The differences that existed under 3G networks will be minimized, and users will be able to roam from any place in the world with 4G coverage.
Socially, 4G networks have facilitated collaborative efforts among professionals and intellectuals, because many users can access the network without causing a strain on, or decrease in, speeds. This makes it easier for many people to collaborate on a specific issue. Current 3G networks, by contrast, are constantly subject to congestion and strain.
Many different mobile phones also compete for the same resources. With 4G, devices will cooperate instead (Rumney and Agilent Technologies 32): mobile phones will work in a way similar to broadband routers and will become part of the cellular infrastructure itself. The users of a given network will have a measure of control over it, as they will own part of the network.
The network will thus be able to determine how much data a user requires and adapt accordingly, shifting capacity based on the demands of each user within the network. Issues like dropped calls due to congestion will be avoided because a mobile phone will gain access to extra capacity when needed (Kumar et al. 70). VoIP (Voice over IP) services will also improve.
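The demand-driven capacity sharing described above can be illustrated with a minimal sketch; the allocation function and figures below are hypothetical and far simpler than a real 4G scheduler, which also weighs channel quality, fairness, and service classes.

```python
def allocate_capacity(total_mbps, demands_mbps):
    """Split a cell's capacity in proportion to each user's demand.

    Hypothetical illustration of demand-adaptive allocation only.
    """
    total_demand = sum(demands_mbps)
    if total_demand <= total_mbps:
        return list(demands_mbps)  # everyone gets what they asked for
    share = total_mbps / total_demand
    return [d * share for d in demands_mbps]

# Example: a 100 Mbps cell with three users demanding 80, 30, and 10 Mbps.
print(allocate_capacity(100, [80, 30, 10]))  # [66.67, 25.0, 8.33]
```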
Previous cellular networks like 3G and 2G were characterized by flat data rates: operators charged a flat rate for their consumers' data needs. With 4G, flat rates will give way to a tiered pricing model. Under flat rates, heavy use strained operators' networks, and users did not fully benefit because of constant congestion.
With tiered rates, operators stand to gain. Initially, consumers may be wary of tiered rates, reasoning that they will be overcharged for their internet use; in practice, tiered rates encourage a better user experience. With 4G networks in place, a consumer will be able to use as much data as they need under an established tier system (Olsson and Mulligan 134).
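A tiered model of the kind described here can be sketched as a lookup over usage bands; the tiers and prices below are invented purely for illustration and do not come from the cited sources.

```python
# Hypothetical tiered data pricing (tiers and prices are invented).
TIERS = [
    (2, 10.0),            # up to 2 GB  -> $10
    (10, 25.0),           # up to 10 GB -> $25
    (float("inf"), 50.0), # above 10 GB -> $50
]

def monthly_charge(usage_gb):
    """Return the charge for the first tier the usage fits into."""
    for cap_gb, price in TIERS:
        if usage_gb <= cap_gb:
            return price

print(monthly_charge(1.5))   # 10.0
print(monthly_charge(12.0))  # 50.0
```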
4G networks will also give more people access to faster network speeds. Fixed broadband requires physical infrastructure, which depends on the region and is limited by cost. With 4G, network access depends only on users having mobile devices with 4G connectivity, so more people will be able to use 4G networks through their phones.
Faster networks have led to the development of network-sharing technologies known as wireless hotspots, whereby a single mobile subscriber shares their internet connection with a variety of mobile devices (Lemstra, Hayes and Groenewegen 112). Through this mechanism, more people can enjoy the benefits of faster networking speeds.
4G networks have brought intense change to society, and their social implications have been profound. Many people find it easier to communicate through 4G. Social progression with 4G has had a lasting impact on intellectuals and students: learning has become easier as more students use the internet for their educational needs. Social change is also visible in the widening of the generation gap.
With faster internet access speeds, present and future generations will differ from the generations of the 2G period, and rifts can be witnessed between younger and older generations. Knowledge is considered power, so the younger generations can be seen as more powerful than the older ones.
Most Western societies with improved data access speeds or 4G coverage exhibit this rift, and 4G has intensified these social changes. Its impact on personal life can be compared to the effect of computers on businesses (Olsson and Mulligan 88). Societal change will continue into the future as younger populations progress thanks to the ease of accessing information.
Economic progression
4G cell services have also driven various forms of economic progress. Better and faster methods of communication have facilitated the growth of the economy, and 4G network services have enabled the establishment of companies with positive effects on the economy, such as Amazon, where commodity transactions are conducted online.
4G networks are particularly advantageous where a business depends on communication and its employees travel constantly. An efficient and effective communication infrastructure can help ensure that a business is successful, and 4G networks contribute to this. Just as 3G changed how business was conducted, 4G networks are changing the ways of doing business.
In many countries, broadband and communication networks are considered national infrastructure, just like energy and transport, and countries with efficient and fast communication networks witness increased economic growth. Adoption of 4G networks will thus encourage a country's economic growth (OECD 21). Various advantages arise from 4G connectivity.
Business communication will become more efficient, and businesses will be encouraged to establish themselves in countries or regions with fast, reliable network connectivity.
Countries currently trialing 4G networks tend to have stronger economies than countries that have yet to start 4G infrastructure development. New infrastructure is required for 4G services, and users also need a 4G-enabled cellphone to access them. Countries with established 4G networks have witnessed economic growth driven by faster communication networks.
Evolution of cellular networks
The evolution toward 4G cellular networks began with the adoption of 2G networks. The European Commission made GSM compulsory in Europe in the early 1990s, a decision attributed to its lower power consumption, which allowed the production of smaller handsets with greater security and longer battery life.
Nokia, Ericsson, and regional wireless carriers and operators seized the moment to secure their domestic advantage within foreign markets. The U.S., on the other hand, faced licensing issues as it tried to find the best network standard for its region. By 2000, Western Europe was home to the largest cellular market (ABI Research 32).
Continuous historical advancement led to the establishment of 4G networks. Before the shift from voice to data, mobile evolution was seen chiefly as a movement from analog to broadband. Initially, the objective was to achieve a global standard for the 3G era (UMTS). This was not easy, as Qualcomm developed CDMA technology that was considered more efficient than UMTS.
Despite this, GSM still determined the evolution of platforms until the internet became dominant. After many regional disagreements and dialogues, a single flexible standard was adopted by all parties (Saboji and Akki 82).
The adoption of 3G began much earlier in Japan, where NTT DoCoMo, a major operator, pioneered the transition in 1999, two years before the first official 3G networks were implemented around the world.
With the development of new cellular platforms, core technologies have increased the capacity carried by a given spectrum. Stakeholders in the mobile industry have therefore aggressively demanded improved, high-speed wireless networks. Even so, by 2005 a large percentage of the world had yet to adopt 3G networks.
Conclusion
In conclusion, 4G networks are the product of a long evolution that began with the adoption of 1G networks in the 1980s. The need for faster speeds and greater efficiency has driven continued development within the communication field, as various stakeholders in the communication industry push for faster network speeds.
Increasing demand from mobile subscribers has influenced researchers to develop the new specifications that define 4G networks. Throughout this history, the trend has been to merge existing technologies and make them more efficient. 1G networks guaranteed basic mobile voice service and signaled the beginning of evolving mobile networks.
2G networks then brought better coverage and capacity. They were followed by 3G, which improved speeds, and finally by 4G, which promises greater improvements and benefits. 4G systems will function as complete IP-dependent wireless internet networks.
They will provide a range of telecommunication services, including advanced mobile services. This essay has thus provided an analysis of 4G networks and their technological, social, and economic progression.
Works Cited
ABI Research. "4G Networks to Handle More Data Traffic Than 3G Networks by 2016, Says ABI Research." Business Wire 27 Oct. 2013: 22-36. Print.
Agbinya, Johnson Ihyeh. Planning and Optimization of 3G & 4G Wireless Networks. Denmark: River Publishers, 2009. Print.
Kumar, Amit, Yunfei Liu, Jyotsna Sengupta, and Divya Bhaskar. "Evolution of Mobile Wireless Communication Networks: 1G to 4G." International Journal of Electronics & Communication Technology 1.1 (2012): 68-72. Print.
Lemstra, Wolter, Vic Hayes, and John Groenewegen. The Innovation Journey of Wi-Fi: The Road Toward Global Success. Cambridge: Cambridge University Press, 2010. Print.
OECD. OECD Communications Outlook 2013. Paris: OECD Publishing, 2013. Print.
Olsson, Magnus, and Catherine Mulligan. EPC and 4G Packet Networks: Driving the Mobile Broadband Revolution. 2nd ed. Burlington: Elsevier Science, 2012. Print.
Rumney, Moray, and Agilent Technologies. LTE and the Evolution to 4G Wireless: Design and Measurement Challenges. 2nd ed. New York, NY: Wiley, 2013. Print.
Saboji, Skumar V., and Channappa B. Akki. "A Survey of Fourth Generation (4G) Wireless Network Models." Journal of Telecommunications Management 2.1 (2009): 77-91. Print.
Sauter, Martin. 3G, 4G and Beyond: Bringing Networks, Devices, and the Web Together. 2nd ed. West Sussex: Wiley, 2013. Print.
Steinbock, Dan. The Mobile Revolution: The Making of Mobile Services Worldwide. London: Kogan Page, 2005. Print.
Television, or TV, was privately demonstrated, in a rudimentary way, on 26 January 1926 by John Logie Baird, a Scottish electrical engineer. On that day, he showed his crude, flickering, half-tone televised images to a group of approximately 40 scientists from the Royal Institution in London.
Baird was one of a small number of lone inventors who, during the first half of the 1920s, worked determinedly to devise a system of seeing by electricity. Other experimenters included Jenkins of the United States, Belin of France, and Mihaly, a Hungarian working in Germany.
By any criterion, Baird's achievement was an outstanding one. For nearly 50 years from 1878, scientists, engineers, and inventors in the UK, the US, France, Germany, Russia, and elsewhere had sought a solution to the problem of transmitting moving images electronically from one place to another.
Their objective had been to implement a method that would enable vision signals to be sent to a distant observer, much as sound signals could already be transmitted. In 1876, Alexander Graham Bell had invented the telephone, which allowed hearing by electricity, or telephony, to be readily introduced.
Soon afterwards, proposals were advanced for systems that would enable seeing by electricity, or television, to be realized. However, the problem of distant vision was of an altogether different order of complexity compared with that of telephony. Success eluded many, from laymen to university professors, who advanced ideas in the hope of providing the solutions needed at the time.
Evolution from Black and White Times
According to Gerbarg, television has been evolving since the first black and white transmissions. Over the years, persistent technological development and the desire for high quality viewing led to improved television standards. The first public demonstration of black and white television was conducted in 1927, while color television was demonstrated in June 1929.
During the early days of commercial broadcasting, many countries across the world gave serious attention to the advancement of black and white technology. In the United States, the first all-industry standards were recommended by the National Television System Committee (NTSC) in the early 1940s.
Coaxial cable entered the world of television in 1936, when the first experimental transmission took place between New York and Philadelphia. This experiment proved to be highly successful in terms of transmission performance as multiple frequencies could be transmitted over the same shielded medium.
After its invention, black and white TV technology quickly spread throughout the world. At the same time, scientists and engineers continued their efforts to invent color television, a daunting task at the time, not least because any color system had to work alongside the large installed base of monochrome sets. Eventually, engineers were able to make color television compatible with the existing monochrome technology, ushering in the era of color television.
The Essential Ingredient
The discovery of the photoconductive properties of selenium by Willoughby Smith and his assistant in 1873 changed the course of television history. Prior to 1873, no usable effect had been demonstrated relating changes of light flux to changes of electrical quantities, a necessary requirement in any television system.
On this view, the history of television dates from Willoughby Smith's letter to the editor of the Journal of the Society of Telegraph Engineers. However, such a view would ignore all the work undertaken from 1843 onward on picture telegraphy, the process by which images of documents are sent electrically from one place to another, which influenced early distant vision schemes.
In picture telegraphy and television, scanning and synchronizing are fundamental operations. Whether the object whose image is to be transmitted is two-dimensional, as in picture telegraphy, or three-dimensional, as in television, in practice it must be analyzed point by point with a scanning beam, and the received image must be synthesized by another scanning beam synchronized to that at the transmitter.
An important difference between picture telegraphy and television is the speed of scanning. In television engineering, an image must be scanned in a fraction of a second whereas in picture telegraphy practice, the duration of a scan can be several minutes.
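The scan-and-synchronize principle can be expressed in a few lines of code: the transmitter serializes an image point by point, and a receiver scanning in the same order rebuilds it. This is a conceptual sketch only, not a model of any historical apparatus.

```python
def scan(image):
    """Transmitter: serialize a 2-D image point by point, row by row."""
    for row in image:
        for pixel in row:
            yield pixel

def synthesize(signal, width, height):
    """Receiver: rebuild the image by scanning in the same (synchronized) order."""
    pixels = list(signal)
    return [pixels[y * width:(y + 1) * width] for y in range(height)]

original = [[0, 1, 0],
            [1, 1, 1],
            [0, 1, 0]]
received = synthesize(scan(original), width=3, height=3)
assert received == original  # faithful only because both ends scan in step
```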
The principles of picture telegraphy and television, although not commercially adopted until the 20th century, evolved slowly during the 19th century, a century in which the display of images, by various means, fascinated all who saw them. Photography, cinematography, seeing by electricity, and picture telegraphy all had their beginnings during this time.
The first three of these are concerned with capturing an image of a static or moving scene or object, and, together with picture telegraphy, with the subsequent display of that image to one or more persons. There is a symbiotic relationship between picture telegraphy and television, as between still and cine photography.
According to Abramson, the knowledge and experience gained from experiments on picture telegraphy during the early formative stage of television was quite beneficial to the advancement of television technology.
The Advanced Television
In the early days, television was a simple affair. It was a technically highly standardized medium, with fairly similar organizational structures, content types, and business models. It provided nationwide content delivered by national networks, distributed regionally by television stations with some local programming thrown in, and funded either by advertising or by government.
Today, however, television is getting considerably more complicated and varied. According to Gerbarg, four generations of picture quality can be distinguished. Pre-TV, the first generation, was the exploratory stage of television experiments in the 1920s and 1930s by Baird, Zworykin, and Farnsworth. Baird's 1926 video image had 30 lines of resolution and projected a crude picture.
The second generation was associated with the analog TV which had 525 lines. In the United States, an average of seven broadcast channels could be operated. The flow of information was synchronous and was sent through a shared medium.
The third generation was characterized by digital TV. After the 1970s, analog TV broadcasting branched out and was distributed over cable and satellite; in time, these channels became digitized. Broadcast TV likewise went digital in the late 1990s, with standard and high definition TV emerging and entirely replacing analog transmission by 2009. Cable and satellite TV created alternative transmission infrastructures around the same period.
Today, cable TV uses pipes of approximately 3 Gbps, about 75 times more than the physical terrestrial broadcast spectrum used in any locality.
This extra transmission capacity was first used horizontally, widening the range of traditional-quality channels and leading to narrowcasting in terms of content and audiences. After a while, digital technology was extended to a vertical deepening of the channel. HDTV is one example: it displays twice the number of horizontal lines, as well as wider lines and more bits per pixel.
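The figures quoted above also imply how much terrestrial broadcast capacity a single locality has; the division below is an inference from the essay's own numbers rather than a cited figure.

```python
cable_pipe_bps = 3e9  # ~3 Gbps cable plant, as quoted above
ratio = 75            # cable is said to carry ~75x the terrestrial capacity

implied_terrestrial_bps = cable_pipe_bps / ratio
print(f"Implied terrestrial broadcast capacity: {implied_terrestrial_bps / 1e6:.0f} Mbps")
# -> 40 Mbps per locality
```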
Individualized TV, the fourth generation of television, is presently emerging, with Internet TV and mobile TV as its main manifestations. Both depend on fiber, coax, or wireless links. Together, they enormously increase individual transmission capacity in both upstream and downstream directions. This raises the number of channels horizontally, but also enables individualized, asynchronous, and interactive TV.
Mobile TV also makes such individualized content ubiquitously available. Internet TV, in turn, permits user-generated content, which leads to greater networking possibilities among users.
This has led to two-way transmission, moving from narrowcasting to individual casting, and to user-generated content sites such as YouTube. In bit terms, such low-resolution content translates to about 300 kbps, giving a very low quality.
Conclusion
Certainly, television is becoming too big, too advanced, and too important to remain a one-size-fits-all medium. As time goes by, television will continue to diversify both horizontally and vertically.
Horizontal diversification includes more standard quality programs and channels and more specialized content in standard quality. Vertical diversification implies a variety of quality levels, from cheap low resolution to highly enriched, immersive, and participatory TV. For commercial content, despite the lowering of cost for a given technology, competition and user expectations will still drive the production cost to even higher levels.
Works Cited
Barnouw, Erik. Tube of Plenty: The Evolution of American Television. New York: Oxford University Press, 1990. Print.
Burns, Richard. Television: An International History of the Formative Years. London, UK: The Institution of Electrical Engineers, 1998. Print.
Cesar, Pablo, and Konstantinos Chorianopoulos. The Evolution of TV Systems, Content, and Users toward Interactivity. Hanover, MA: Now Publishers Inc, 2009. Print.
Cianci, Philip. High Definition Television: The Creation, Development, and Implementation of HDTV Technology. North Carolina: McFarland & Company, 2012. Print.
Gerbarg, Darcy. Television Goes Digital. New York, NY: Springer Science & Business Media, 2009. Print.
Horak, Ray. Telecommunications and Data Communications Handbook. Hoboken, NJ: John Wiley & Sons, 2012. Print.
Libbey, Robert. Signal and Image Processing Sourcebook. New York, NY: Springer, 1994. Print.
In contemporary occidental countries, where human rights have made comparatively notable progress, accessibility is one of the crucial criteria by which technologies should be judged. The disability rights movement and the growing popularity of digital devices and visual interfaces prompted the creation of assistive technologies providing access to information for excluded groups of the population. The main goal of this paper is to create a comprehensive picture of screen readers' evolution and their current usability. The study results constitute a linear account of the instruments, technologies, and events that drove the advance of assistive technologies, and screen readers specifically, outlining their history. The principal method used in the study is outcome-based information evaluation, which helped to construct this account. The research led to conclusions apropos of the current state of assistive technologies and to recommendations that can be followed to enhance accessibility.
The History of Screen Readers
Low vision is a condition that may interfere with even trivial daily activities in vexing ways. Limited peripheral and central vision, unfocused vision, tunnel vision, and heightened sensitivity to light affect how people interact with their environment on a day-to-day basis. Visual impairments of different kinds are a growing health problem, with millions of United States citizens affected, and given the lifestyle that the majority of people now adhere to, the number of those affected will likely grow. One of the challenges that low vision may pose is the use of new technologies, computing devices in particular.
Several inventors and researchers contributed to progress in the domain. Among them are Jim Thatcher, the creator of the first screen reader (Ademi & Ademi, 2018); Alistair Edwards, who advanced the software with Soundtrack, one of the pioneering word processors with an auditory interface; and Ted Henter, a programmer who founded Freedom Scientific, a company centered on assistive technologies, and created JAWS (Lazar et al., 2007). Despite these advancements, several gaps remain in the research on assistive technologies, such as a lack of coordination between medical professionals and developers of assistive devices, as evidenced by the scarcity of literature devoted to the topic in recent years. This paper aims to synthesize the history of the screen readers that contributed significantly to the ever-evolving domain of assistive technology.
Materials and Methods
Considering the specifics of the research paper, its focus on the history of screen readers, and the evolution of assistive technology, outcome-based information evaluation seems a suitable method for the research. It serves to estimate the degree to which historical advances, in combination with technological ones, resulted in the creation of screen readers and their current state of development. An array of research papers and scholarly articles from different periods is the principal material basis for the paper, as it allows for the construction of a historical account based on evidence from each temporal stage.
Results
TTS and Speech Synthesis as the Precursors of Screen Readers
With the rapid growth and expansion of digital technologies over the second half of the last century, computing devices have reached almost all spheres of human existence. Being able to access digital information may even be considered one of the primary criteria of life quality and a right of every citizen. Nevertheless, modern technologies, as an average user knows them, are unable to satisfy the needs of a large number of society's members, even in developed countries (Evans & Blenkhorn, 2003). The struggle for accessibility and the origins of screen readers can be traced back to the pre-PC era and the elaboration of Text-to-Speech (TTS) technologies, which were actively pursued even before the Second World War (Ademi & Ademi, 2018). Nowadays, screen readers have become one of the most widely used subtypes of assistive technology.
Text-to-speech rendering is one of the primary methods that set in motion the research that eventually made screen readers possible. Although screen readers differ significantly from TTS, the principles of speech synthesis act as a cornerstone of the technology's history (Edwards, 1989). Ademi and Ademi (2018) state that the TTS process is based on the artificial production of speech, the first attempts at which are documented in the eighteenth century. Nevertheless, the focus on using TTS in screen reading technologies to enhance accessibility formed only during the last century.
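The principle can be demonstrated today with an off-the-shelf library; the sketch below uses pyttsx3, an offline Python TTS package, purely as a modern illustration and bears no relation to the historical systems discussed here.

```python
# Minimal text-to-speech sketch using the pyttsx3 library (pip install pyttsx3).
# Illustrative only: modern off-the-shelf TTS, not the historical systems above.
import pyttsx3

engine = pyttsx3.init()          # pick the platform's default speech driver
engine.setProperty("rate", 150)  # words per minute; screen readers let users tune this
engine.say("Text on the screen is rendered as synthetic speech.")
engine.runAndWait()              # block until the utterance finishes
```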
Speech synthesis is another technology that provided a foundation for the creation of screen readers. Christian Kratzenstein produced the first mechanical speech synthesizer in 1779; the machine's performance was limited to five long vowels (Nguyen et al., 2018, p. 349). This result was significantly improved over time in other models. The introduction of the Voice Operating Demonstrator (VODER) in 1937 by the Bell Telephone Laboratory, a manual electronic machine created by Homer Dudley, was the next step towards screen readers (Nguyen et al., 2018). It should be noted that at that stage the voice quality and level of intelligibility were rather low, and the instrument was overcomplicated to use; it even had a pedal to modulate the speech (Nguyen et al., 2018). These two devices, and the ones in between, demonstrated the ability of technology to generate a semblance of a voice and inspired the future of accessibility and assistive technologies.
The history of speech synthesizing devices is not limited to the borders of the United States, although a significant share of the inventions was made there. Nguyen et al. (2018) note that apart from the synthesis of English, an array of other, predominantly European languages was involved in the process. For instance, software that could synthesize Italian, named MUSA, was introduced in 1975 (Nguyen et al., 2018, p. 350). The interest in other languages and the progress made abroad pushed the Bell Telephone Laboratory to produce the first multilingual TTS mechanism in 1997, based on its research on multilingual synthesis (Nguyen et al., 2018, p. 351). In this way, the history of screen readers and progress in the area extends over several countries, incorporating research and technologies directed at various languages.
The Creation of First Screen Readers
The mechanisms discussed above serve as the basis for the first screen readers, which are founded on the principle of transforming text into speech. Ademi and Ademi (2018) describe their usage as follows: using key combinations, the user can move through the user interface and have all the text available on the screen read aloud, and text entered via the keypad is likewise transformed into speech by the screen reader and read aloud (p. 1334). Jim Thatcher is the inventor of IBM Screen Reader (1986), which is considered to be the first of its kind and was designed for the DOS operating system (Ademi & Ademi, 2018). It was followed by IBM Screen Reader 2, which, unlike the first IBM screen reader, had a graphical interface (Mynatt, 1997). The two applications, despite their limitations, gained particular popularity within the IBM company.
Initially, the software was designed for personnel with vision impairments at IBM and was not primarily commercial. Users could control IBM Screen Reader 2 with numeric keys, a fact that made it the first of its kind (Ademi & Ademi, 2018). IBM Screen Reader and IBM Screen Reader 2 became the pioneers in the area of assistive technologies focused on rendering textual content as speech. Their spread was further prompted by the growth of activism, primarily concentrated on the right to access information (Scotch, 1989). The two IBM screen readers were a vast improvement over the speech synthesizers produced by the Bell Telephone Laboratory in terms of speech quality and intelligibility.
By the end of the eighties, a variety of technologies designed to help visually impaired users had emerged. Frank Audiodata produced screen readers that used a modified keyboard to determine the part of the text to be spoken, with two sliders, one moving horizontally and the other vertically (Edwards, 1989). Vert is an example of a screen reader adaptation available in several versions, which differed in performance quality and price (Edwards, 1989). Soundtrack, developed by Edwards (1989), represents an attempt to build a word processor with an auditory interface and had two versions. According to its creator, the initial evaluation revealed that the product was difficult to navigate, since users had trouble recalling the arrangement of the internal elements in the windows (Edwards, 1989). Nevertheless, Soundtrack was declared usable, and Soundtrack 2 came with several improvements (Edwards, 1989). The screen readers produced by Frank Audiodata, Vert, and Soundtrack eased the lives of visually impaired users in the late twentieth century and laid the ground for their more advanced counterparts.
Societal Context of the Era
The disability rights movement generated suitable conditions for the expansion and commercialization of assistive software and applications. The creation of the first screen reader in the mid-eighties followed the passage of the Rehabilitation Act of 1973 (Scotch, 1989). The act was meant to protect disabled groups from discrimination in programs and employment practices provided by the federal government or with its assistance. Section 508 centered on the right to access governmental sites and their usability for people with impairments; it established essential guidelines for accessibility (Olalere & Lazar, 2011). Olalere and Lazar (2011) emphasize that in U.S. law the notion of accessibility is still defined by Section 508. Even though it does not apply to private organizations, Section 508 motivates companies interested in cooperating with the federal government to enhance their web accessibility.
Recent Developments
Section 508 had a noticeable impact on the sphere of digital accessibility; it served as an incentive for the market to grow and encouraged attempts to improve screen readers. For instance, BrookesTalk is a browser instrument that structures the content on a page and thus creates a coherent digital environment for visually impaired users (Lazar et al., 2007). JAWS is a screen reader designed by Freedom Scientific, a company centered on digital accessibility products (Lazar et al., 2007). The instrument allows its users to access a page in a non-sequential way by incorporating commands that enumerate available frames (Leporini & Paterno, 2004). VoiceOver has recently become another major player in the assistive technology market, following JAWS, while NVDA is distinguished from the screen readers listed here by being free. Window-Eyes is another prominent screen reader, developed by GW Micro; both Window-Eyes and JAWS run on Microsoft Windows and attract the majority of visually impaired users (Lazar et al., 2007). In this way, legislation concerning digital accessibility appears to be one of the most salient stimuli behind the wider variety in today's screen reader market.
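The non-sequential navigation these tools offer rests on enumerating a page's structural elements. The sketch below, using only Python's standard library, lists headings the way a screen reader's "headings list" command might; it is a simplification, since real products such as JAWS work from the browser's accessibility tree rather than raw HTML.

```python
# Sketch: enumerate a page's headings, as a screen reader's "headings list"
# command might. Illustrative simplification using only the standard library.
from html.parser import HTMLParser

HEADING_TAGS = ("h1", "h2", "h3", "h4", "h5", "h6")

class HeadingLister(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_heading = None
        self.headings = []  # (level, text) pairs

    def handle_starttag(self, tag, attrs):
        if tag in HEADING_TAGS:
            self.in_heading = int(tag[1])

    def handle_data(self, data):
        if self.in_heading and data.strip():
            self.headings.append((self.in_heading, data.strip()))

    def handle_endtag(self, tag):
        if tag in HEADING_TAGS:
            self.in_heading = None

page = "<h1>History</h1><p>...</p><h2>Recent Developments</h2>"
parser = HeadingLister()
parser.feed(page)
for level, text in parser.headings:
    print(f"Level {level}: {text}")
# Level 1: History
# Level 2: Recent Developments
```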
Discussion
The assistive technology uplift of the eighties is the result of years of activism and policy enforcement. IBM improved screen readers, made them ready for broader usage, and shifted toward their commercial manufacture. Despite the progress accomplished, digital accessibility is a sphere that needs extra effort and attention from Web developers. For instance, Section 508 compliance remains a problem decades later: the research conducted by Olalere and Lazar (2011) shows that, on average, 2.27 accessibility guidelines are violated per federal website home page (p. 307). Web accessibility is even equated with the notion of usability where people with different disabilities are concerned; it embodies the idea of inclusion and equal opportunities (Freire & Paiva, 2007). The 2009 memorandum on Transparency and Open Government requires openness and the availability of governmental information to all citizens (Olalere & Lazar, 2011). Nevertheless, it is still undermined by disregard for the guidelines provided by Section 508.
Recommendations
Compliance with the Rehabilitation Act of 1973 is one of the major steps that could help promote digital accessibility. Requiring each federal site to have an accessibility statement is one way to promote it (Olalere & Lazar, 2011). Bringing the issue into public discourse, for instance by dedicating research to digital technologies that promote accessibility, is another way to draw attention to the problems that need to be solved. Universal usability is still far from being reached, as it takes visually impaired users longer to accomplish tasks on the Web (Lazar et al., 2007). This signals a lack of understanding on the part of Web developers of the problems that a portion of users encounters daily. Web accessibility should be incorporated into general Web education, a process in which screen readers could play a role by providing users who are not visually impaired with an experience that some face daily.
References
Ademi, L., & Ademi, V. (2018). Visually impaired students' education through intelligent technologies. Knowledge International Journal, 28(3), 1133-1138.
Edwards, A. (1989). Soundtrack: An auditory interface for blind users. Human-Computer Interaction, 4(1), 45-66.
Evans, G., & Blenkhorn, P. (2003). Architectures of assistive software applications for Windows-based computers. Journal of Network and Computer Applications, 26(2), 213-228.
Freire, A., & Paiva, D. (2007). Using screen readers to reinforce web accessibility education. ACM SIGCSE Bulletin, 39, 82-86.
Lazar, J., Allen, A., Kleinman, J., & Malarkey, C. (2007). What frustrates screen reader users on the web: A study of 100 blind users. International Journal of Human-Computer Interaction, 22(3), 247-269.
Leporini, B., & Paterno, F. (2004). Increasing usability when interacting through screen readers. Universal Access in the Information Society, 3(1), 57-70.
Mynatt, E. D. (1997). Transforming graphical interfaces into auditory interfaces for blind users. Human-Computer Interaction, 12, 7-45.
Nguyen, T. V., Nguyen, B. Q., Phan, K. H., & Do, H. D. (2018). Development of Vietnamese speech synthesis system using deep neural networks. Journal of Computer Science and Cybernetics, 34(4), 349-363.
Olalere, A., & Lazar, J. (2011). Accessibility of U.S. federal government home pages: Section 508 compliance and site accessibility statements. Government Information Quarterly, 28(3), 303-309.