The Internet of Things (IoT), a network of devices connected to each other by means of the Internet, significantly affects the life of a modern person (Rowland, Goodman, Charlier, Light, & Lui, 2015). In a sense, its emergence has changed people's attitudes towards searching, gathering, and perceiving information. Because urban planning ultimately targets the experiences of urban dwellers, the Internet of Things is vital for a city designer to consider.
Internet of Things in the Work of an Urban Planning Specialist
Practical applications of IoT are plentiful. Sharing data, accessing them from multiple devices, and the speed with which this can now be done practically shape the modern person's understanding of comfort (Rowland et al., 2015); this is what modern people have become used to. Since an urban planner's main task is to design the environment in a way that maximizes each person's comfort without limiting the comfort of society as a whole, the utilization of IoT in city solutions should be a new priority. As mentioned above, IoT has changed the way people perceive information. Urban planners should adapt to those changes by designing environments where information is presented so that people can access it easily, in the way they are accustomed to.
The simplest example of such design solutions is interactive city maps, which are often located in transportation hubs. IoT has taught people to create, share, and otherwise interact with information rather than be passive observers. This fact partly explains why modern people can find it hard to read traditional maps and why they often resort to electronic ones on their devices. Knowing this, an urban planner has to acquire additional skills and build knowledge of modern solutions that strengthen people's ability to acquire, use, and share information. Such skills encompass, for instance, increased attention to energy planning. Given the growing need for charging devices, public places such as surface and underground transport stops can provide citizens with free USB charging stations powered by solar panels, as is already done in North Carolina (Davis, 2017).
Previously, there was no need for such public services because there was no demand for them; people who lived in the age of traditional computing did not feel the urge to stay online all the time. Now, items like Wi-Fi access in public places or even whole-city coverage solutions are on an urban planner's agenda. The creation of a continuous space with Internet access is becoming a necessity rather than just a handy addition. Therefore, the places we design seem to demand more emphasis on functionality and usability than on pure aesthetics.
Given all the above, my career path seems to need adjustments in terms of technical knowledge, such as the basics of electrical engineering, which will enable me to understand how and where telecommunications infrastructure is best placed. Additionally, as a professional I will need to stay alert to developments in popular mobile technologies and consider the possibilities of their use in my planning and design activities.
Conclusion
All in all, IoT has brought a major change to people's attitudes and behaviors in cities. In modern times, mobile devices seem to be the dominant channel for acquiring information. These changes have to be processed and addressed by urban planning specialists in order to tailor the urban environment to people's needs. Given that, knowledge of electrical engineering will become a valuable addition to my background.
References
Davis, C. (2017). NC State adds solar charging station at bus stop. Web.
Rowland, C., Goodman, E., Charlier, M., Light, A., & Lui, A. (2015). Designing connected products: UX for the consumer Internet of Things. Sebastopol, CA: O'Reilly Media, Inc.
For researchers, it is crucial to examine the reliability and relevance of the resources used in research work. Today, the Internet is the most consulted source of information. Even though it is used by many researchers, it can also be the most unreliable source of information, since there are few restrictions on those who post information.
There are various approaches that a researcher can use when evaluating sources. First, a researcher should always strive to use sources that have an identifiable author. The information contained in a source may be true, but it is hard to validate information whose author's credentials are not known. When the author is identified, the researcher may try to establish his or her educational background and employment details.
Secondly, a researcher should find out whether the information contained in the source matches his or her research topic. This is done by examining the title, headings, table of contents, and other descriptions given for the source. If the information it contains contradicts existing knowledge, the researcher should establish whether it can be verified. The nature of the publication should also be identified: whether it is scholarly or popular, and whether the research methodology is illustrated. At this point, the researcher should check whether sources have been cited inside the text; all reliable sources should contain in-text citations. There are also details that reveal author bias, such as whether the author omits relevant information or writes emotionally.
Thirdly, a researcher should examine where the information comes from: a scholarly, accepted academic, or government agency, for example. He or she should check affiliated institutions, parent organizations, and financing organizations. The URL also gives a clue about who created the source. For example, a URL ending with .edu indicates a source from an educational institution; if it ends with .org, it is most likely an organization; and if it ends with .gov, it is most likely a government body.
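This domain heuristic can even be automated. The short sketch below (Python standard library only; the URL is hypothetical) classifies a source by the top-level domain of its host, mirroring the .edu/.org/.gov rule of thumb described above; it is a rough aid, not a substitute for actually evaluating the source.

```python
from urllib.parse import urlparse

def source_type(url):
    # Classify a source by the top-level domain of its host (a rough heuristic only).
    host = urlparse(url).hostname or ""
    if host.endswith(".edu"):
        return "educational institution"
    if host.endswith(".gov"):
        return "government body"
    if host.endswith(".org"):
        return "organization (often non-profit)"
    return "other / commercial"

print(source_type("https://library.example.edu/guides/evaluating-sources"))  # educational institution
```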
The fourth aspect that can be used in determining the reliability and relevance of a source is finding out the main reason why the source was created. Reflect on whether the main function of the source was to enlighten, convince, entertain, or promote. The intended audience should also be established; at this point, a researcher should ask whether the audience of the piece was scholars, the general public, professionals, or learners. To identify the purpose with ease, it is helpful to study the purpose statements of journals. The date when the source was created also plays a significant part in determining its trustworthiness.
Relevance and reliability play a crucial role in determining the strength of one's research work. Research that contains relevant and valid ideas is more likely to be accepted by the community than research that lacks them. It is essential that a researcher deal with the various weaknesses that can hinder the research work from attaining its intended purpose. Research that passes all relevance and reliability checks is strong, while research that fails one or all of these measures is weak.
Regardless of traffic and direction, Internet resources must comply with the accessibility principles established by the WCAG 2.1 guidelines. However, it is of great interest to attempt to correlate these principles with alternative forms of data provision: banners, .pdf, .pptx, or .docx files. It is essential to assess how widespread accessibility is beyond websites, which is the purpose of this work.
Billboard
For advertising products displayed on the streets, the convenience of perceiving information is critical. For this reason, designers often use vibrant colours that can attract attention. However, as WCAG 2.1 postulates [1], these colours need to be matched in contrast. Examining the Samsung poster [2] in Figure 1, it is possible to identify two base colours: a blue background and white text. A contrast ratio calculator [3] determines that the ratio between the two colours is 2.04:1, which is inadmissible under criterion 1.4.6, Level AAA. In order to make the advertisement easier to view for people with visual impairment, designers should change the background colour, for example, to RGB (4, 91, 128).
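For reference, the ratio reported by such a calculator follows the relative-luminance formula defined by WCAG. The sketch below computes it for white text against the suggested darker blue background; the colour values come from the paragraph above, and the code is only an illustration of the formula, not a replacement for a dedicated tool.

```python
def relative_luminance(rgb):
    # WCAG relative luminance for an 8-bit sRGB colour.
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(colour_a, colour_b):
    # WCAG contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05).
    lighter, darker = sorted((relative_luminance(colour_a), relative_luminance(colour_b)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

white_text = (255, 255, 255)
darker_blue = (4, 91, 128)  # the background suggested above
print(round(contrast_ratio(white_text, darker_blue), 2))  # about 7.5:1, which clears the 7:1 Level AAA threshold
```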
Poster
Meanwhile, WCAG 2.1 gives other recommendations related to pictures: for example, 1.1.1, Level A. As can be seen in Figure 2 [4], there is a man's face and a small signature at the bottom of the poster. Although this form is impressive and evokes certain emotions, a text alternative needs to be introduced for people with impaired vision or cognitive impairments. For instance, designers could add text elements to the bottom of the poster, as shown in Figure 3.
PDF
Fiction and science literature often uses a column format in which the text is presented in two parallel columns on the page. This is always difficult because there is no single standard regarding the reading sequence. Figure 4 [5] illustrates this misunderstanding: the user may be uncertain about the order in which to read the information after the word "In". This does not follow the principle stated in 1.3.2, Level A. However, one can either add a "read more on this page" cue or change the text to a single column to address this shortcoming.
DOCX
Most often, if a website offers access to a .docx text file, the document is downloaded to the user's computer. While such documents can often be customized to fit one's own needs, the publisher must take care of readability. Figure 5 [6] shows that the downloaded .docx cannot be changed because editing is locked. Criterion 1.4.8, Level AAA, indicates that 1.5 line spacing is required, but the document only provides single spacing. Fixing this situation does not seem difficult: it is enough for the publisher to change the spacing.
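If the file were unlocked, the fix could even be scripted. A minimal sketch using the python-docx library is shown below (the file name is hypothetical); it sets 1.5 line spacing for every paragraph, which is what criterion 1.4.8 asks for.

```python
from docx import Document  # pip install python-docx

doc = Document("downloaded.docx")  # hypothetical file name
for paragraph in doc.paragraphs:
    # WCAG 1.4.8 (Level AAA) asks for line spacing of at least 1.5 within paragraphs.
    paragraph.paragraph_format.line_spacing = 1.5
doc.save("downloaded-accessible.docx")
```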
Labels
In stores, designers also need to follow the WCAG 2.1 guidelines, as customers have different levels of health and ability. Cases where the manufacturer places the customer in a discriminatory position should be excluded, as can be seen in Figure 6 [7]. Although IKEA is known for its unique product naming, according to 3.1.3 and 3.1.6, Level AAA, mechanisms for reading should be provided for unusual or compound words. For instance, for the word shown in Figure 6, designers could add a transcription, as shown in Figure 7.
References
Jason in Hollywood. Unbox Your Phone Samsung Galaxy S8 Billboards [Website]. Web.
In the future, things are expected to become active participants in business, information, and social processes, where they can interact and communicate with each other, sharing information about the environment, reacting, and influencing the processes taking place in the world without human intervention. The Internet of Things (IoT) is inherently a continuous flow of data in space, passing through various networks. The purpose of this work is to prepare a summary of an article devoted to the phenomenon of the IoT.
The article by Kenneth Li-Minn and Kah Phooi Seng
The multi-page work by Kenneth Li-Minn Ang and Jasmine Kah Phooi Seng includes three sections that approach the question of defining the IoT in different ways. The authors introduce the reader to reasonably everyday examples that allow a closer understanding of the objects described (2). In order not to get lost in the variety of forms, Ang and Seng divide the known mixture of IoT applications into eight categories, which include medical, scientific, agricultural, and environmental examples (1). The main idea of this section is to demonstrate that the needs of society dictate the creation of electronic devices.
Data storage and security issues
Since the IoT is based on methods that meet user needs, much attention must be paid to data storage and security issues. The authors are convinced that this issue should be taken into account when creating a complex system architecture. The models of organization offered by the researchers are multilayered, with each level designed for a specific task (6). Not least among the functions of such devices is the recognition of biometric indicators. Of course, creating sensors for such a mission is both labor- and resource-intensive, which is why the authors argue that building the IoT should be based on integrating energy efficiency, multifunctionality, and compactness.
Areas for the evolution of IoT
The last section is devoted to forecasting the future of such devices. The researchers identify four areas that are likely to shape the evolution of the IoT. The first is ecosystem integration between devices. Second, future work will focus on improving energy efficiency and performance. Third, given today's trends in machine learning, future devices will operate on the principles of neural networks and artificial intelligence, anticipating user demands. Finally, the more data is collected, the more complex the protection protocols must be. The researchers convince their readers that the IoT of the future should be resistant to hacker attacks regardless of the application.
Work Cited
Ang, Kenneth Li-Minn, and Jasmine Kah Phooi Seng. "Application Specific Internet of Things (ASIoTs): Taxonomy, Applications, Use Case and Future Directions." IEEE Access, vol. 7, no. 1, 2019, pp. 56577-56590.
The age of the Internet is affecting traditional print reading, but not for the reasons one might think. It is evident that teenagers and children, who use the web more actively, both read and write a considerable amount. The main explanation is that these activities are among the few ways a person can interact with the Internet, the others being video and voice exchange. However, the format of reading and writing on the web is vastly different from traditional print because online texts come in a shorter form. Therefore, the declining reading scores of students are the result of a change in the capability to focus and concentrate, because these mental muscles are being atrophied.
One's ability to focus on a single task is highly dependent on the type of activity. It is stated that many writers face trouble when they are unable to concentrate on a book without getting distracted after two or three pages (Rich, 2008). In other words, active Internet use negatively affects people by robbing them of their ability to focus on a single activity. However, one can counter this statement by claiming that it is not reading that is negatively affected by the web, but rather the style or format of the activity (Carr, 2008). The Internet encourages or facilitates the use of shorter texts due to their efficiency, which might also mean that such an approach is superior to traditional long pieces.
The age of the net has eliminated the need to memorize most items due to their constant availability online. One should understand that human memory is an instrument far from perfect because it can operate in a faulty manner (Introduction to psychology, 2015). Therefore, a person's cognition, which is the process of acquiring and using knowledge, is more relevant today (Introduction to psychology, 2015). In other words, it can be argued that the Internet has made people's lives easier by removing the need for memorization and focus.
Although one can view this as a negative aspect of the web, similar arguments can be made against any modern technology, such as phones, cars, or escalators. For example, cars and elevators reduce the need for physical activity, which is why one might condemn them for being responsible for inactivity-related issues. However, any manifestation of progress leads to the elimination of some challenging aspects of life. A person can compensate for the reduction in activity by going to the gym or exercising at home. Similarly, the Internet's format for reading and writing might be considered negative, but the fact is that it is efficient at delivering information. In addition, it eliminates the need to memorize and focus for prolonged periods, which is why one might experience a drop in concentration.
In conclusion, the age of the internet might be the main cause of the reduction in reading scores among students, but the problem lies in the fact that the testing methods are outdated. The current format of texts is more concise, whereas traditional prints are long. The measurement approaches need to change without condemning the evident benefits of progress, where a person does not need to concentrate and memorize to learn and think. Therefore, online reading is a mere shift from the regular type of activity.
References
Carr, N. (2008). Is Google making us stupid? The Atlantic. Web.
Introduction to psychology. (2015). Minneapolis, MN: Libraries Publishing.
Rich, M. (2008). Literacy debate: Online, r u really reading? The New York Times. Web.
China has emerged as a technological giant in the current era and is giving a tough time to major countries of the world by effectively adopting and implementing a low-cost strategy. The policies of the Chinese government are stiff, and it is believed that its strategies carry double standards. The government of China is massively criticized on a number of issues, such as human rights abuses in Tibet and Darfur. The government of China has banned certain websites even though journalists were promised they could carry on their work (Brennan, 2008).
The Internet is experiencing a boom, and the entire economy of a country can be transformed through it. The Chinese government is creating problems of privacy and secrecy for the people of China with regard to Internet access (Beehner, 2008). The economy of China is in a transformational era, having shifted from a planned economy to a market economy. However, the Chinese government is erecting barriers for businesses, as its policies are getting stricter day by day. It ignores the fact that through the Internet China could fulfill its strategies and easily achieve its objectives (Chau, 2008).
Freedom of speech has been questioned in China a number of times, and a writer was sent to jail when he wrote an article about the Chinese government (Edidin, 2008). People in China are restricted from knowing what is happening within the government and from raising their voices (Kahn, 2005). The government restricts the Internet because it believes that sensitive information would flow out of the region to other parts of the world. However, people are finding ways to get around these barriers (Pan, 2005). The Internet affects the economy, and the government has to remove these barriers in order to progress in this era.
A powerhouse of technology, China has won the race for the most cell phone users in the world. The country is rich in technology and has more engineering students and technological companies than the USA. Its technological giants are ready to face the challenges of the current era, and they have the capacity to surpass every country in terms of technology. The previous center of technology, Silicon Valley in California, is shifting towards China (Fannin, 2008). Technological change has transformed the educational system in China, and the implications of technology are quite visible in the infrastructure.
Global organizations face difficulties in China because of various entry barriers and the strict rules and regulations applied by the Chinese government. However, the Chinese government is focusing on the local markets and providing a stepping stone to local vendors so that they can experience staggering growth (Popkin & Iyengar, 2007). The impact of technology is tremendous in China, and the Internet plays its role in the country's development.
Thus, in a nutshell, we can say that China is progressing and emerging in the world as a superpower. The form of government in China is communist in nature, and the people there are considered to be atheist. The government provides facilities to the people and wants to excel in the race of technology and military upgrading. However, the government of China is becoming a hindrance to the progress of individuals because it is putting undue barriers on the use of the Internet.
Telnet and FTP (File Transfer Protocol) are two protocols that use TCP as a transport to initiate sessions on remote computers, and both programs fall into the client-server model. Both protocols lack encryption and transmit passwords and logins over the Internet in the form of plain text (Farrell).
The working principles of the two protocols are similar: the client machine sends commands while the server receives and executes them. The main difference is their purpose. FTP, as its abbreviation implies, establishes a connection to send files over the network, while Telnet is used to work on remote terminals configured as servers by running commands from remote machines configured as clients.
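A minimal client-side sketch of an FTP session, using Python's standard ftplib (host and credentials are made up), illustrates this model: the client drives the session with commands, and the USER/PASS credentials cross the network as plain text.

```python
from ftplib import FTP

# Hypothetical server and credentials, purely for illustration.
ftp = FTP("ftp.example.com")              # opens the TCP control connection (port 21)
ftp.login(user="alice", passwd="secret")  # USER/PASS commands are sent unencrypted
ftp.retrlines("LIST")                     # request a directory listing over a data connection
ftp.quit()
```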
Ethereal
Ethereal, now known as Wireshark, is a program that analyzes packets and network traffic. It allows looking through all the traffic passing through a network adapter in real time, capturing, decoding, and analyzing each packet. Although the program may have gained popularity through the illegal interception of valuable information, it can be used as a diagnostic tool and for educational purposes. For example, Ethereal can be used when analyzing network clients and their behavior with different applications and drivers. Other examples include locating connectivity problems, graphing traffic patterns, and building statistics (Combs).
Network Sniffer
Sniffers are programs that analyze traffic. To demonstrate how a sniffer identifies streams, the program Ethereal will be used as an example. Ethereal recognizes the structure of each protocol and thus allows disassembling a network packet, showing every protocol field at any level; at the time of writing, there were 759 protocols that Ethereal could dissect (Ethereal: Features). Thus, during browsing or any other Internet activity in which a login and a password are entered, the sniffer will capture all the packets sent and received during that session. The sniffer can translate each packet according to the protocol used, e.g. HTTP, and the stream that carried the login and the password can be revealed.
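The same capture step can be sketched programmatically. The snippet below uses the scapy library (the capture filter and the assumption of unencrypted HTTP on port 80 are illustrative) to grab a handful of packets and print their raw payload, which is essentially what a sniffer does before a GUI such as Ethereal/Wireshark decodes each protocol field.

```python
from scapy.all import sniff, TCP, Raw  # pip install scapy; capturing usually requires root privileges

def show_payload(pkt):
    # Print the raw application-layer bytes of packets that actually carry data.
    if pkt.haslayer(TCP) and pkt.haslayer(Raw):
        print(pkt[TCP].sport, "->", pkt[TCP].dport, bytes(pkt[Raw].load)[:80])

# Capture ten packets of unencrypted web traffic; an HTTP POST carrying a login form
# would expose the credentials in this payload.
sniff(filter="tcp port 80", prn=show_payload, count=10)
```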
PBX
The simulation of a PBX can be implemented through specific emulation software that replaces the hardware parts of the telephone exchange with software analogues. In this case, the work of the PBX is completely emulated on the computer, with no physical connections present. Additionally, other equipment, such as routers, can be configured to simulate the work of a PBX and the PSTN. In that case, the simulation involves physical connections, with only the router simulating a PBX.
VOIP
There are many protocols available for Voice over Internet Protocol (VoIP); the most common are the following (Packetizer; Ixia):
H.323
Advantages: better interoperability with the PSTN; better support for video; excellent interoperability with legacy video systems.
Disadvantages: no built-in methods for reporting user location; far more complex.

SIP
Advantages: easier to develop and troubleshoot; high compatibility.
Disadvantage: inability to detect a traveling device.

MGCP
Advantages: cheap local access system; carrier-class MGCP/Megaco media servers available today and deployed in the field.
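To make the comparison more concrete, the sketch below hand-builds a SIP INVITE request and sends it over UDP with Python's standard library (all addresses, tags, and the Call-ID are made up, and a real call would also carry an SDP body describing the media). SIP's plain-text, HTTP-like syntax is a large part of why it is considered easier to develop and troubleshoot than the binary-encoded H.323.

```python
import socket

# Hypothetical endpoints, purely for illustration.
invite = (
    "INVITE sip:bob@example.com SIP/2.0\r\n"
    "Via: SIP/2.0/UDP client.example.com:5060;branch=z9hG4bK776asdhds\r\n"
    "Max-Forwards: 70\r\n"
    "To: Bob <sip:bob@example.com>\r\n"
    "From: Alice <sip:alice@example.com>;tag=1928301774\r\n"
    "Call-ID: a84b4c76e66710@client.example.com\r\n"
    "CSeq: 314159 INVITE\r\n"
    "Contact: <sip:alice@client.example.com>\r\n"
    "Content-Length: 0\r\n\r\n"
)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(invite.encode("ascii"), ("sip.example.com", 5060))  # 5060 is the default SIP port
```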
The modern world is a highly computerized environment in which everything, or almost everything, can be found with the help of the Internet. Moreover, this worldwide network allows computer users to enjoy the advantages of Free Software accessible online. However, history shows that the situation could have developed in a completely different way if the inventors of the ARPANET, and later the Internet, had cooperated with privately rather than publicly funded organizations. Many scholars, including Dravis (2003) and Yamamoto (2008, p. 516), believe that the growth of Open Standards and of the Internet as such would have been endangered if private interests had been involved.
The story began in 1957, when the USSR launched its first artificial satellite, Sputnik. This event fueled the wish of U.S. researchers to achieve a goal of equal or even greater importance. The Advanced Research Projects Agency was founded in pursuit of this goal in 1962 (Hauben, 2010). The 1960s were the time when computers, previously considered mere calculating machines, became means of connecting the people who used them. The work of the Advanced Research Projects Agency soon resulted in the creation of the so-called ARPANET, the predecessor of the modern Internet and the first computer network invented by humanity (Dravis, 2003; Hauben, 2010).
Further on, the Internet developed from the ARPANET, and scholars like Clark and Lick, as cited by Hauben (2010), call this invention not a technological but rather a human achievement. At the same time, the challenge of Internet growth is viewed by the same scholars as a purely technological issue imposed on the human beings working on its development. The main advantages of the Internet are the open standards of the TCP/IP protocols and the Free Software available to any computer user without the need to pay royalties (Song, 2008; Wheeler, 2007). Drawing from this, scholars like Hauben (2010) and Wheeler (2007) argue that the creation of the Internet in collaboration with a private company like AT&T would have had adverse effects on the Internet's growth and the development of its open standards.
Nowadays, the use of the TCP/IP protocols is free, and so is the use of the licensed software that can be distributed through the Internet. This is possible because the Internet was developed using public funds; if private company finance had been involved, the situation would be different. First of all, the use of the TCP/IP protocols would be paid for, as the company owning the licensing rights would certainly seek to profit from their sale. Second, the distribution of Free Software would hardly be possible, as the company owning the rights to the Internet protocols would censor its content. Finally, all software users would have to pay royalties to software developers and to the companies that had bought the rights to license the TCP/IP protocols through which that software was distributed.
So, the Open Standards practiced on the modern Internet are vital for the growth of the web and for the development of the Free Software principle within it. One cannot ignore the fact that if private funding had been the basis for Internet development, Open Standards would hardly have been possible, as private investors would have aimed at making a profit and recouping the money they invested.
References
Dravis, Paul. Open Source Software. Infodev, 2003. Web.
Hauben, Michael. History of ARPANET. Behind the Net, 2010. Web.
Song, Steve. Open Standards: It's Not Just Good for the Internet. Many Possibilities, 2008. Web.
Wheeler, David. Why Open Source Software / Free Software (OSS/FS, FLOSS, or FOSS)? Look at the Numbers! FLOSS, 2007. Web.
Yamamoto, Toki. Estimation of the advanced TCP/IP algorithms for long distance collaboration. Fusion Engineering and Design 83.2 (2008): 516-519. Print.
Frame tagging is the process whereby packets of data are marked to aid identification during communication. Identifiers are placed in the headers of the packets, and this enables the switches on the network to correctly recognize each packet and forward it to the correct switch within the network (Flood 56). There are situations where packets get lost, and it is then that identification is needed in order to correctly identify the node.
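As a concrete illustration of such an identifier, the sketch below uses the scapy library to build an Ethernet frame carrying an IEEE 802.1Q VLAN tag, which is the most common way switches mark frames with the VLAN they belong to; the MAC and IP addresses are made up.

```python
from scapy.all import Ether, Dot1Q, IP  # pip install scapy

# An Ethernet frame tagged as belonging to VLAN 10 (hypothetical addresses).
frame = (
    Ether(src="00:11:22:33:44:55", dst="66:77:88:99:aa:bb")
    / Dot1Q(vlan=10, prio=0)          # the 802.1Q tag: VLAN ID and priority
    / IP(src="10.0.10.5", dst="10.0.10.7")
)
frame.show()  # prints every field, including the inserted VLAN tag
```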
VTP: This is a protocol that uses information acquired from virtual LANs in its domain and assists in the management of VLANs on a network-wide basis. The management role includes the renaming, addition, and deletion of VLANs. This protocol reduces and eases a task that previously had to be carried out by the network administrator. In the past, network administrators had to do this manually, and the result was a messy network that was difficult to update and manage if the network administrator was not in the office.
MPLS
Multiprotocol Label Switching (MPLS) is a technique that employs labels to identify paths through the switches within a network, enabling the transfer of data packets from one node to another. The similarity this protocol has to a VLAN is that in both cases packets are identified between the switches within the network so that they can be forwarded to the correct nodes (Flood 102).
MPLS marks data packets with one or more labels that are swapped at each hop during label lookup. This type of lookup is much faster than an ordinary lookup in the IP routing table. It enables the entry and exit points of the data packets to be identified so that the packets are forwarded correctly.
MPLS is different from VLANs in that it employs label lookup, as opposed to a lookup of header identifiers in the IP table, which is the technique employed by VLANs.
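The speed difference can be sketched with two toy lookup functions (illustrative data structures only, not a real forwarding plane): an MPLS label is a fixed-size key resolved with a single exact-match lookup, whereas an IP destination requires a longest-prefix match over the whole routing table.

```python
import ipaddress

# MPLS label-forwarding table: incoming label -> (outgoing interface, outgoing label).
lfib = {100: ("eth1", 200), 101: ("eth2", 300)}

def mpls_forward(label):
    # One exact-match lookup on a fixed-size key.
    return lfib.get(label)

# IP routing table: prefix -> next hop.
rib = {
    ipaddress.ip_network("10.0.0.0/8"): "192.0.2.1",
    ipaddress.ip_network("10.1.0.0/16"): "192.0.2.2",
}

def ip_forward(destination):
    # Longest-prefix match: every prefix must be checked and the most specific match wins.
    addr = ipaddress.ip_address(destination)
    best = None
    for prefix, next_hop in rib.items():
        if addr in prefix and (best is None or prefix.prefixlen > best[0].prefixlen):
            best = (prefix, next_hop)
    return best[1] if best else None

print(mpls_forward(100))       # ('eth1', 200)
print(ip_forward("10.1.2.3"))  # 192.0.2.2 -- the /16 beats the /8
```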
ADSL services are asymmetric, typically offering around 1 Mbps for upload streams and up to 5 Mbps for download streams, and this asymmetry is achieved by separating frequencies on the line. ADSL uses frequency-division multiplexing. Copper lines are used to transmit data at high speeds through a digital subscriber line access multiplexer. When this multiplexing technique is employed, voice-frequency signals are separated from data traffic. Traffic from the digital subscriber line is routed between the customer's equipment and the network provider. Electrical signal frequencies are multiplexed on the copper cable in order to achieve the high data rates experienced by subscribers. These frequencies are spaced by a specific amount in order to reduce any chance of interference (Flood, 70).
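The band plan below gives approximate figures for ADSL over an ordinary telephone line (Annex A); the exact numbers vary by standard and are shown here only to illustrate how frequency-division multiplexing keeps voice and data apart on the same copper pair.

```python
# Approximate ADSL-over-POTS (Annex A) frequency plan, in kHz -- illustrative figures only.
bands_khz = {
    "voice (POTS)":    (0, 4),
    "guard band":      (4, 25),
    "upstream data":   (25, 138),
    "downstream data": (138, 1104),
}

for name, (low, high) in bands_khz.items():
    print(f"{name:15s}: {low:5.0f} - {high:5.0f} kHz")
```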
Comparisons between DSL versions (Flood, 95).
DSL Type Distance Limit
IDSL 18000 Feet
G.Lite 18000 Feet
HDSL 12000-18000 Feet
SDSL 12000 Feet
VDSL 3000-4500 Feet
Two-wire and four-wire transmission are techniques that use a copper wire pair as the medium for transmitting signals. It is common to see copper pairs in places where traditional phones were used. An analog signal is conducted from one point to another, during which it is multiplexed and modulated so that the signal reaches the other side in the same form as it was transmitted. The received signal is then amplified to restore signal strength. Several noise-cancellation techniques, such as echo cancellation, are implemented (Flood, 90).
DSL technology is implemented over this mode of transmission, as a splitter within the DSL equipment allows the data signal to use the frequencies above the voice band that are normally left unused.
Pair gain is a method of maximizing the number of lines available to customers by transmitting multiple signals over one twisted pair. This reduces the number of physical lines required to add more customers to a network. The method is not good for DSL for two reasons. First, it interposes multiplexing equipment on the copper pair that DSL needs as a continuous transmission line. Second, one pair-gain system usually takes 24 voice lines and multiplexes them onto a single T1, over which the voice channels are transmitted to the exchange. One T1 carries approximately 1.5 Mbps, and this has to be shared across all 24 lines; the maximum bandwidth a voice channel can utilize is about 56 Kbps.
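The arithmetic behind that limitation can be shown in a few lines (the figures are the nominal T1 values, used here only for illustration):

```python
# Nominal figures for a T1-based pair-gain system (illustrative).
t1_bandwidth_kbps = 1544     # a T1 carries roughly 1.544 Mbps
framing_overhead_kbps = 8    # T1 framing bits
voice_channels = 24          # lines multiplexed onto the single T1

per_channel_kbps = (t1_bandwidth_kbps - framing_overhead_kbps) / voice_channels
print(per_channel_kbps)      # 64.0 kbps per channel, of which about 56 kbps is usable for data
```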
The situation with pair gain in Australia is that, after it was introduced by major telecommunication companies, DSL technologies could not reach customers despite promises from the big companies about its implementation. This has forced customers to look for other solutions in order to benefit from the Internet. As discussed above, telecommunication companies had first reassured customers that it was possible to enjoy ADSL services while still using pair-gain technology. Unfortunately, this has led to many frustrated customers, and the issue has even reached the political arena, with the ministers responsible for communication being criticized.
Owners of intellectual property have a duty to ensure that the commercialization of their property does not, in any way, conflict with the law. In the Information Technology sector, programmers should ensure that the programs they develop are in agreement with the provisions of the law and that they do not develop programs that will be used in illegal activities like hacking. Similarly, providers of internet services should ensure that the materials that they post on their websites are in agreement with the law. For instance, owners and operators of blogs, websites and social networks are partly liable for defamatory statements posted on their sites depending on their level of involvement in the creation of the defamatory information. This paper analyses the interconnection between the legal liability of owners of intellectual property and the use of the intellectual property by looking into internet service provision and defamation on the internet.
Cyberlibel
With the increased popularity of internet blogs and social networks, the rate of defamation on the internet has also increased. Defamation generally refers to a false and negative representation of another person's character leading to a bad reputation. It takes two main forms: slander and libel. Slander is in the form of spoken language, while libel is written. The most common form of defamation on the internet is libel, because of the written evidence that blogs, websites and social networks provide to their users.
One of the factors that have facilitated the increase in cyberlibel cases is the ever-increasing popularity of computers and the internet. The internet now provides services spanning virtually all realms of life: people use it to shop, find news, run business transactions, and for entertainment. This implies that the internet is accessed by a great number of people with varying levels of awareness of the law. The effect is that some ignorant users may use the internet illegally by posting defamatory materials. Such information will be accessed by many people and will thus have adverse effects. This behavior should be strictly discouraged to ensure that technology is used to achieve the goals it is intended to achieve. In most cyberlibel cases, the defamed person has information on the sites that he/she can use to prove the offence, and once the person proves that he/she has been defamed, he/she is normally entitled to damages awarded with liberality.
Owners of Blogs
Owners and operators of social networks and blogs have a moral and legal responsibility to control the kind of information posted on their sites. To some extent, they may have legal liability for defamation occurring on their sites, despite the fact that they normally have immunity from liability arising from defamatory statements posted there. The stated immunity applies if the owners of these intellectual properties (sites, blogs and social networks) were not actively involved in the posting of the defamatory statements. More about the immunity is explained in the CDA (Communications Decency Act). Conversely, if the operators or owners of websites expressly ask for information which ultimately turns out to be defamatory, the aforementioned immunity will not apply (Cram, 2002, p. 3).
In light of the discussed issue, there is a great need for owners of internet blogs and operators of interactive websites to be very careful about their level of involvement in service delivery. This is because users can, maliciously or otherwise, post defamatory statements on the blogs and websites, which may lead to liability on the side of the owner/operator. For instance, a defamatory profile created on a social networking site may lead to liability for owners or operators depending on the level of involvement of the operator in the creation of the profile. That is, if the operator/owner expressly asks for specific information which is prone to defamation, the aforementioned immunity will not apply, and thus the operator/owner will be liable for the defamatory statements (Cram, 2002, p. 5).
Providers of internet services are also subject to a different type of liability: notice-based liability. This type of liability explains the relatively light regulative measures put in place by service providers to avoid the creation of defamatory materials on their sites, because if service providers actively tried to identify and weed out defamatory materials on their websites, they would constantly be put on notice of materials that are likely to be defamatory, creating stronger grounds for holding them liable. Internet service providers therefore have the responsibility of reducing their liability for defamation by striking a proper balance between regulation and precautionary non-regulation (Gomez, 2000, p. 1).
Liability of defamer
The defamer is the most obviously liable person in a defamation case. Just as in other defamation cases, the person who posts material that falsely damages a person's reputation is liable for it. However, it is normally difficult to identify the person who posted such material, and even when the person is known, other questions may arise, such as questions regarding jurisdiction.
Although some jurisdictional issues may arise in cases where the defamer does not expect his/her postings to affect different jurisdictions, such cases favor the defamed. Thus a defamer who posts defamatory materials on the internet about a person in a different state will be charged in the victims state. However, some courts may reject personal jurisdictions based on the nature of the case. For instance, a Pennsylvania court declined to hear a case (Betting V Tostigan) about a New York defamer who had defamed a Pennsylvania complainant on a betting website. The court decided that, since the statements were directed at New York, Pennsylvania had no jurisdiction to handle the case.
Just like the providers of internet services, users of internet services are also given some immunity by the CDA (Communications Decency Act). Users who post defamatory materials on websites and blogs are thus immune from defamation suits, since it is only the originator of the materials who should be held liable. This was enforced by the California Supreme Court in the Barrett v. Rosenthal case, where the defendant was offered immunity since she was not the originator of the defamatory materials. The case raised a lot of concerns about the over-protectiveness of the CDA towards defamation defendants (Hilden, 2006, p. 1).
Court Cases
An example of a case about defamation on the internet is the stated Barrett v. Rosenthal case. The complainants were doctors dealing with frauds in the health sector. They had sued the operator of a different website who had posted materials on a newsgroup which she herself did not operate. The materials did not recognize the doctors as advocates of health ethics and disparaged the plaintiffs' professional competence. The case eventually narrowed to one item that had been posted by the defendant on her newsgroup. The message had been received by the defendant from a private source and it defamed the plaintiffs, but the defendant went ahead and posted it. The court decided that the defendant was immune from a defamation case since the defamatory message had an originator other than the defendant. In this case, the fairness of accusing only the originator of the message was questioned, since the originator was not responsible for posting the material on the newsgroup (Hilden, 2006, p. 1).
Another example of a court case related to the above discussion is Carafano v. Metrosplash.com. In this case, the defendant operated the website matchmaker.com. The website was meant to help single people meet singles of the opposite sex and possibly start dating. This was facilitated by profiles of prospective single persons that were collected using a detailed questionnaire on the site. The plaintiff spotted a false profile about her that had been created by an unknown user through the extensive questionnaire, and thus she sued the company. It was ruled that the creation of the detailed questionnaire by the company was an effort to actively participate in information development, and thus the defendant had participated in creating the defamatory profile. In addition, the court stated that the defendant operated information provision services, and thus the immunity provided by the CDA did not apply. This case evidences the fact that the immunity accorded to service providers and blog operators can be revoked depending on the level of activity of the service providers (Nicolas, 2007, p. 1).
The last example of a case on this subject is the Griffis v. Luban case. It is an example of how jurisdictional issues are solved in defamation cases. The court of appeals in Minnesota decided that the state of Alabama had the right to exercise jurisdiction in a case where a Minnesota resident had defamed an Alabama complainant on the internet. The defamation had targeted the plaintiff's professional profile in a newsgroup on the internet; the accused had posted several defamatory messages with the intention of tarnishing the professional abilities of the complainant. The Alabama court had awarded damages worth $25,000 to the complainant, and she sought enforcement in Minnesota. The Minnesota appellate court upheld the decision arrived at by the Alabama court, since the case was under Alabama jurisdiction. It was further added that since the defendant knew that the defamation was directed at Alabama and expected to be sued there, it was within the jurisdiction of the Alabama court to handle the case (Hoffman, 2006, p. 1).
Analysis of the effects of the CDA
The precedents set on the CDA have made courts interpret the immunity to include people who intentionally republish defamatory information. Congress passed this law because the preexisting situation was worse. Before the CDA was enacted, hosts and operators of internet services used to ignore the messages posted on the sites they hosted. They avoided looking at material about which there were complaints, and thus they avoided the editing that could make them liable. The result was that the internet services they offered to their clients were uncontrolled, and all kinds of information were posted on them. It can thus be argued that, although the CDA has brought about some controversial issues, it was necessary to pass it since it brought some control to internet service provision (Hoffman, 2006, p. 1).
In passing the CDA, Congress intended to put some regulation on the kind of materials posted on websites and end the "free-for-all" (Hilden, 2006, p. 1) era. It thus structured the Act so that operators of websites would be able to handle defamation complaints by reading content before deciding whether to post it on their sites. This was intended to depend on whether the materials conform to the rules of the site (Gomez, 2000, p. 1).
The idea of determining whether a post qualifies for de-posting may sound like some kind of censorship, because the person making the determination is not part of the government. However, the censorship comes as a result of the providers' function of editing material before posting it on their sites. Congress thus wanted to phase out the online public forums in which people used to share all kinds of ideas. The power given to internet service providers to edit material before posting it on their sites can be seen as an effort by Congress to regulate the kind of information shared on the internet while at the same time promoting freedom of speech. In a nutshell, the CDA was a big step towards the regulation of online information sharing, and Congress should consider making the necessary amendments to this Act in order to close the loopholes it has left for internet defamers (Nicolas, 2007, p. 1).
Conclusion
The biggest challenge in the enforcement of defamation law on the internet is the fact that it is usually difficult to prove to the authorities that the defendant is responsible for the postings. Once this challenge is overcome, the case can be easily solved and jurisdictional issues can be settled. Attorneys who have dealt with a number of cyberlaw cases can predict the result of such a case with reasonable precision. On the other hand, if the complainant lacks an attorney who can find evidence in such a case, the case will most probably fail for lack of evidence.
With the ever-increasing popularity of technology and the World Wide Web, there have been several instances of misuse of these technologies. One of the ways in which technology is being misused is the discussed habitual posting of defamatory materials on the internet by users. Virtually all parties involved in the provision of internet services are responsible for ensuring that such materials are not posted on websites and blogs. Owners of such blogs should ensure that they are not actively involved in soliciting information from users which could possibly be defamatory, since in that case they will be held liable for any defamatory materials posted on such sites. On the other hand, users have a moral duty to avoid posting malicious and defamatory statements.
Reference List
Cram, A. (2002). Injurious falsehood and defamation on the internet. Web.
Gomez, E. (2000). Defamation on the internet. Web.
Hilden, J. (2006). Defamation on the internet. Web.
Hoffman, I. (2006). Defamation on the internet. Web.
Nicolas, D. (2007). Defamation and Slander on the Internet. Web.