PGP Encryption System as a Good Idea

Introduction

Privacy and internet security are subjects of great interest in today's globalized world. The information we send or receive needs to be transmitted in the most secure manner possible. Encryption is a form of encoding data into a form called ciphertext, which cannot be easily understood by unauthorized people. In order to decode the encryption, a decryption system is required that is known only to the authorized receiver. Decryption is the method of converting encrypted data back into its original form so that it can be read by the end-user. The system of encryption and decryption is gaining great significance in the world of wireless communications.

This is mainly because the wireless communication medium can be easily tapped and misused. Therefore, it is vital to use encryption and decryption systems, especially to protect privacy. In general, it can be said that the stronger the ciphertext, the better the security (Bauchle et al., 2009). This paper discusses the Pretty Good Privacy (PGP) encryption system in general and why it is a good choice for both individual and organizational use.

Pretty Good Privacy (PGP)

Pretty Good Privacy, generally abbreviated as PGP, is a computer program that helps secure online transactions. Specifically, PGP provides cryptographic privacy and authentication. It is often used for signing, encrypting, and decrypting e-mails. Today, it has become one of the most reliable, easy-to-use, and effective privacy systems. PGP encryption works by making use of public-key cryptography, which binds public keys to a user ID or an e-mail address.

The popularity of PGP is increasing among individuals, organizations, and businesses for several reasons, such as its confidentiality, authentication, integrity, and double encryption. It is essential to ensure that only the intended receiver reads the message; for this purpose, the message is encrypted using the receiver's public key and can be decoded only by the receiver using his private key. Hence, confidentiality is high while using PGP.
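
To illustrate this confidentiality property, the sketch below (a minimal example assuming the Python cryptography package, not an actual OpenPGP implementation such as GnuPG) encrypts a short message with the receiver's public key so that only the holder of the matching private key can read it.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# The receiver generates a key pair; the public half is shared openly.
receiver_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
receiver_public = receiver_private.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Anyone can encrypt with the public key...
ciphertext = receiver_public.encrypt(b"meeting at noon", oaep)

# ...but only the receiver's private key can decrypt it.
plaintext = receiver_private.decrypt(ciphertext, oaep)
assert plaintext == b"meeting at noon"
```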

PGP a Good idea for Individuals

There are several reasons why PGP is highly recommended for individual use. It is a foremost provider of privacy solutions that avert the risk of unauthorized access to digital property by defending it at the source. Additionally, PGP gives individuals a full set of tools to protect themselves against privacy breaches. With PGP's technology, individuals themselves make the assessment concerning what information about them needs to be released, and it gives the individual every right to determine what information is released. In fact, the latest versions of the PGP cookie.cutter help individuals surf the Web securely, without any worry that information about them can be tracked by unauthorized users (ftc.gov, N.D.).

PGP has also helped to form a web of trust. For instance, if one user knows that another user's certificate is valid, he can sign that certificate. The group of individuals who trust the first user will then automatically trust the second user's certificate. One of the greatest advantages of PGP is that it allows an unlimited number of users to sign each certificate. In other words, a network or web of trust develops as more and more users vouch for each other's certificates (networkcomputing.com, 2009).
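
The idea of trust propagating along signatures can be sketched as a toy graph traversal; this is only an illustration, since PGP's real trust model also weighs introducer trust levels and does not chain validity indefinitely.

```python
# Toy model: signatures[a] is the set of user IDs whose certificates 'a' has signed.
signatures = {
    "alice": {"bob"},        # Alice has verified and signed Bob's certificate
    "bob": {"carol"},        # Bob has signed Carol's certificate
    "carol": set(),
}

def trusted_certificates(start: str) -> set[str]:
    """Certificates reachable through chains of signatures starting from 'start'."""
    trusted, frontier = set(), [start]
    while frontier:
        user = frontier.pop()
        for signed in signatures.get(user, set()):
            if signed not in trusted:
                trusted.add(signed)
                frontier.append(signed)
    return trusted

# Someone who trusts Alice ends up trusting Bob's and Carol's certificates too.
print(trusted_certificates("alice"))  # {'bob', 'carol'}
```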

PGP a Good idea for Organizations

Several organizations use the PGP system for the safe transfer and storage of information. In addition to shielding data in transit over a network, the PGP encryption system is also used effectively to protect data in storage over long periods of time.

This is of great significance for organizations because it makes it possible to store private data in a highly secure manner. The latest version of PGP is even more beneficial for organizations because it has added further encryption algorithms. The degree of cryptographic vulnerability differs with the algorithm used, and in most cases the algorithms adopted in recent years are not publicly known to have cryptanalytic weaknesses (Wikipedia, 2009). Hence, it is always recommended to use a good encryption system such as PGP to ensure organizational privacy.

PGP also provides the opportunity for authentication. The sender can sign the memo with his own private key and then encrypt it with the receiver's public key. On the other end, the receiver decrypts the memo using his own private key and checks the signature using the sender's public key. Given that the signature can only be verified with the sender's public key, only the sender could have produced it with his private key. These digital signatures also protect the integrity of the memo, since any alteration after signing causes verification to fail, and they help to further authenticate the memo.
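
A minimal sign-and-verify sketch of this authentication step, again assuming the Python cryptography package rather than PGP's own message format, might look as follows.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

sender_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
sender_public = sender_private.public_key()

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

memo = b"Transfer approved."
signature = sender_private.sign(memo, pss, hashes.SHA256())

# Verification succeeds only for the unmodified memo and the sender's public key.
sender_public.verify(signature, memo, pss, hashes.SHA256())
try:
    sender_public.verify(signature, b"Transfer denied.", pss, hashes.SHA256())
except InvalidSignature:
    print("tampered memo rejected")
```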

The security of the PGP encryption system is comparatively high because of its use of double encryption. It is through a combination of symmetric and asymmetric encryption that PGP ensures both high security and high speed. This is of great significance to organizations because PGP allows the sender to send a single encrypted message to multiple recipients without re-encrypting the entire body of data for each of them; only the small session key is encrypted separately per recipient. If the sender were instead using a purely asymmetric encryption system, the whole memo would have to be re-encrypted for each recipient individually, which is not only time-consuming but also tedious.
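
The following sketch illustrates this hybrid scheme (an illustration of the general idea with the Python cryptography package, not the OpenPGP format): the memo is encrypted once with a random session key, and only that short session key is wrapped separately for each recipient.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

recipients = {name: rsa.generate_private_key(public_exponent=65537, key_size=2048)
              for name in ("alice", "bob", "carol")}

# Encrypt the memo once with a fresh symmetric session key (fast, any size).
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"Quarterly figures attached." * 1000)

# Wrap only the short session key for each recipient (cheap per recipient).
wrapped_keys = {name: key.public_key().encrypt(session_key, oaep)
                for name, key in recipients.items()}

# Any recipient unwraps the session key with their private key, then decrypts.
bob_session = recipients["bob"].decrypt(wrapped_keys["bob"], oaep)
memo = Fernet(bob_session).decrypt(ciphertext)
```

Adding one more recipient costs a single extra public-key operation on the session key, not a re-encryption of the memo itself.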

Conclusion

PGP encryption is comparatively cheap for both individual and organizational use (Alchaar et al., N.D.); in fact, the commercial licence costs only a small amount. Since it is easy to use for local file encryption, secure disk volumes, and network connections, organizations find PGP all the more useful. PGP is a good option for both individual and organizational use because its double encryption is extremely difficult to crack. It is essential to understand the need for strong encryption, as it is the principal means of strengthening security and preventing crime. Strong encryption does not allow easy tracking and therefore prevents the misuse of personal details.

References

  1. Alchaar, H., Jones,J., Kohli, V. and Wilkinson, K. (N.D.) Encryption and PGP. Web.
  2. Bauchle, R., Hazen, F., Lund, J., Oakley, G. and Rundatz, F. (2009) Web.
  3. ftc.gov (N.D.) Consumer Privacy 1997 - Request To Participate, P954807. Web.
  4. networkcomputing.com (2009) PGP Grows Up. Web.
  5. Wikipedia (2009) Pretty Good Privacy. Web.

Defeating Wi-Fi Protected Access Encryption With Graphics Processing Units

Outline

In the field of computing, the past 20 years or so have been marked by an increase in the capability of devices and a massive increase in the use of computers connected via networks to carry out business and related tasks. The processing power of desktop computers has increased almost 100 times; the processing power of an average desktop computer today is in a range that was only dreamed about in the 80s. Alongside this development, people began to use computers for much more than the word processing, spreadsheet preparation, and simple database tasks that characterized the MS-DOS era. Computer networks link banks, schools, hospitals, government agencies, and people, making work much easier to accomplish. As populations become more reliant on these devices and networks, crime has also begun to emerge around their vulnerabilities. This paper introduces a central concept in modern computer networks, namely Wi-Fi, briefly highlights how it is implemented, and focuses on the threat posed by the increased insecurity that high-powered Graphics Processing Units make possible.

The Introduction of Wireless Technology

Networks and networking are commonly used terms in the field of computing. These terms refer to a connection of various computers and devices through the use of communication channels. Networks are important because they increase efficiency by allowing users to share resources. For example, in an office, it is common to see a single printer used to serve many computers or workstations. This is made possible via a network that relies on wired or wireless technology to provide printing services to the various computers at the same time. In the absence of such a network, each computer would have to be attached to a separate printer, thus increasing operating costs.

The advent of the internet saw a vast increase in the use of networks. The internet is a global network of computer networks that brings together governments, learning institutions, commercial and other agencies, thus making a large pool of easily accessible resources available to millions of people all over the world. As more and more people began to use the internet to meet various daily needs, the computer industry came under a lot of pressure to improve the quality of networks. This gradual process of improvement led to the type of network that this paper will focus on, namely wireless networks, or Wi-Fi.

As earlier stated, a computer network provides a communication backbone through which various computers and peripheral devices can be shared. As the name suggests, a wireless network provides users with the advantage that connections from one point to the next do not require cables. A wireless network is thus much easier to set up, and the lack of wires reduces maintenance costs. These networks make use of remote information transmission through electromagnetic waves such as radio waves. In recent years the telecommunication industry has also grown, and a new and popular type of wireless network exists in the domain of cellular networks, which can transmit voice and data over improved channels. Wireless networks have become very popular across the developed world, and it is not uncommon to find hotspots in coffee bars, airports, colleges, train and bus stations, and so on. They offer people great flexibility but may put unsuspecting users in harm's way.

It is primarily for this reason that entrepreneurs interested in using this technology for their business need to be aware of the security risks that such networks imply. For example, within an unobstructed space, a wireless network signal can travel as far as 500 meters, including through heating or elevator shafts (Williams, 2006). It is difficult to ensure that the signals will not travel further than the business space they are meant to cover. Initially, these networks relied upon the Wired Equivalent Privacy (WEP) standard to protect the data being transmitted from interception. WEP in its basic form made use of 40-bit static keys and RC4 encryption to provide security equivalent to that of a wired network. The fact that a wireless medium can be tapped without any physical connection point made this approach only partially effective. An improved approach was then developed, namely Wi-Fi Protected Access (WPA), which utilizes an 8-byte Message Integrity Check (MIC) to help ensure that transmitted data has not been tampered with (Williams, 2006).
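
WEP's underlying cipher, RC4, generates a pseudo-random keystream that is XORed with the plaintext; a compact pure-Python sketch of the algorithm is shown below (for illustration only, with made-up key and IV values; RC4 and WEP are long deprecated).

```python
def rc4_keystream(key: bytes):
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    i = j = 0
    while True:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        yield S[(S[i] + S[j]) % 256]

def rc4_crypt(key: bytes, data: bytes) -> bytes:
    # Encryption and decryption are the same XOR operation.
    return bytes(b ^ k for b, k in zip(data, rc4_keystream(key)))

# In WEP the per-packet RC4 key is the 24-bit IV prepended to the static 40-bit key.
iv, static_key = b"\x01\x02\x03", b"\xaa\xbb\xcc\xdd\xee"
ciphertext = rc4_crypt(iv + static_key, b"hello wireless world")
assert rc4_crypt(iv + static_key, ciphertext) == b"hello wireless world"
```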

In this paper, we will discuss an emerging technique that compromises wireless networks through the use of Graphics Processing Units (GPUs). These new Visual Graphics Adapters contain several general-purpose processors, as opposed to the special-purpose hardware units that characterized their predecessors (Mariziale, Richard III & Roussev, 2007). It is in light of such threats to wireless networks that this paper seeks to demonstrate the possible risks underlying the use of wireless networks for commercial purposes.

Wireless Weaknesses in WEP

The Wired Equivalent Privacy standard, or WEP, is utilized in the IEEE 802.11 protocol and is known to possess serious security flaws that make the network vulnerable to malicious attacks and intrusion. This is cause for concern given that wireless devices are proliferating rapidly and are expected to soon surpass the volume of traditional wired clients. The main driver behind this proliferation lies in the need for businesses to cut costs and improve service delivery. Currently, wireless networks bring together devices ranging from embedded microdevices to larger general-purpose PCs. The price of networking has dropped and the available speeds have increased; people are increasingly dependent on these networks to perform work and other routine tasks, e.g. bill payments, making reservations, etc. (Kocak & Jagetia, 2008).

However, the security of the data and the privacy of Wi-Fi networks remain questionable. This brings to light the fact that almost any unauthorized user with the necessary know-how can access, modify, or use the data being transmitted over a Wi-Fi network. It is, therefore, no surprise that as these networks grow and people begin to store and share more important information, hackers have begun to prey on unsuspecting users. Such instances have led to an increase in research into the security of these wireless networks in recent times. It is important to note that WEP is harder to implement on microdevices that possess low processing power and memory capacity (Kocak & Jagetia, 2008).

As earlier mentioned, WEP operates in compliance with the IEEE 802.11 standard for wireless networks. This standard forms the basic over-the-air interface used between a wireless client and a base station, or between two or more wireless clients. The standard became operational to unify protocols of operation and promote interoperability between devices manufactured by different companies. It is characterized by a high data rate and a simple encryption technique, which made it very popular. One of its major shortcomings is that it mainly addresses the physical layer, which is chiefly concerned with easing the process of transmission between devices. The security of the data and access controls are poorly handled, thus leaving a major loophole for would-be attackers. The WEP protocol has been found to have serious flaws owing to the easily broken cryptographic techniques utilized in the process of data transmission (Kocak & Jagetia, 2008).

Since WEP is intended to provide the same security as that available on a wired network, it utilizes a shared-key authentication technique to identify stations and clients. In a wired network, this key is never transmitted in the open, but in a wireless network there is no physical entry point and the key is virtually in the open. To facilitate shared-key authentication, the network conveys both the challenge and the encrypted challenge over the medium (the airwaves). With both of these in hand, it is possible to make attempts to recover the pseudo-random keystream produced by the key/IV pair. In WEP the same key is used for encoding and decoding a message, and therefore once the keystream for a given key/IV pair has been computed, the message is no longer secure from prying eyes. This fact is best illustrated by software that passively monitors the traffic and attempts to recover the encryption key once enough packets of data have been gathered. Some available versions of such software accomplish the recovery of the RC4 key in as little as 15 minutes, depending on the volume of data on the network. On networks with higher volume the task is accomplished faster; roughly 1 GB of captured data is required to break the key (Computer Fraud & Security, 2001).

Attacks against WEP: Types Used (Theoretical and Technical Description)

From the details provided in the section above, it is clear that WEP can be easily compromised and hence more stringent security is required to secure a wireless network. The attacks that can be mounted against a WEP network can be classified as either direct or passive. In a direct attack, the attacker modifies the contents of the data being transmitted over the network. This is possible because any data packet traveling along these networks contains a short 24-bit initialization vector (IV). With a value this small, repetition is bound to occur within fairly short intervals, thus creating an opportunity to grab a key and use it to intercept data. In a passive attack, the attacker violates the integrity of the network by sniffing. Sniffing involves analyzing the keys being used, identifying repeated ones, and beginning the process of redirecting information to the attacker. Another passive approach involves the use of precomputed tables to decrypt all the data being transmitted on a network. Both of these modes of attack depend on the amount of traffic on the network: the heavier the traffic, the quicker the attacks are accomplished. WEP security is very vulnerable and will most likely not accomplish its goals if the attacker is well informed about its weaknesses. This fact has been proven by the numerous tools that have been developed to crack such networks (Kocak & Jagetia, 2008).
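
The scale of the IV problem can be estimated with a standard birthday-bound calculation: with only 2^24 possible values, a repeated IV becomes more likely than not after a few thousand packets, as the sketch below shows.

```python
import math

IV_SPACE = 2 ** 24  # 24-bit initialization vector

def collision_probability(packets: int) -> float:
    """Probability that at least two packets share an IV (birthday bound)."""
    # exact product form: 1 - prod_{k=0}^{n-1} (1 - k / N)
    log_no_collision = sum(math.log1p(-k / IV_SPACE) for k in range(packets))
    return 1.0 - math.exp(log_no_collision)

for n in (1_000, 5_000, 10_000, 50_000):
    print(f"{n:>6} packets: P(repeated IV) ~ {collision_probability(n):.3f}")
```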

The Migration to WPA and WPA2 Encryption

The failures of WEP have not gone unnoticed, and the result has been two additional security alternatives, namely WPA and WPA2. Wi-Fi Protected Access, or WPA, was developed as a short-term solution to the problems that arose from the use of WEP. WPA was designed specifically for compatibility with hardware that was capable of supporting WEP. Unlike WEP, which was developed in compliance with IEEE 802.11 standards, WPA does not fall under any ratified IEEE standard. The WPA protocol provides an improved key management scheme known as the Temporal Key Integrity Protocol (TKIP). This protocol was a great improvement over WEP, although its implementation required some upgrading of the access points; this ceased to be an issue after 2003, when most client and access-point hardware incorporated the technology. The encryption algorithm is similar to WEP's, but the length of the initialization vector has been increased to 48 bits (Rowan, 2010). The large size of this number makes it difficult to cause a collision of data packets. In addition, the protocol has a second data layer that protects against packet replay, which removes the common WEP tactic of injecting packets to trigger key collisions. If the algorithm detects packets with a similar key within sixty seconds of each other, WPA shuts down the network for sixty seconds.

In practice, WPA supports operation either in Pre-Shared Key mode or with the Extensible Authentication Protocol. In Pre-Shared Key mode, both communicating sides need to know the key, which can be sixty-four hexadecimal digits or a passphrase of eight to sixty-three characters. If a weak Pre-Shared Key is chosen, WPA is prone to brute-force attacks that use lookup tables and increased processing power to speed up the cracking process. The Extensible Authentication Protocol improves the identification of clients but is out of reach for most users, who do not want to spend significant sums of money buying the required equipment (Rowan, 2010). These flaws resulted in further improvements and brought about WPA2, which fully complies with the IEEE 802.11i standard. Under WPA2 the replacement for TKIP appears to be fully secure, but most manufacturers are yet to incorporate the required software upgrades (Rowan, 2010). It may be argued that WPA2 should be enforced even if it requires compromising the compatibility of devices, because it offers the best security.
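
In Pre-Shared Key mode the pairwise master key is derived from the passphrase and the SSID with PBKDF2-HMAC-SHA1 (4096 iterations), which is exactly why a weak passphrase falls to a dictionary attack. The simplified sketch below (hypothetical SSID and wordlist) derives candidate keys and compares them against a target key, which stands in for the check against a captured four-way handshake.

```python
import hashlib

def wpa_pmk(passphrase: str, ssid: str) -> bytes:
    """WPA/WPA2-PSK pairwise master key: PBKDF2-HMAC-SHA1, 4096 rounds, 256 bits."""
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

ssid = "CoffeeShopWiFi"                  # hypothetical network name
target_pmk = wpa_pmk("letmein1", ssid)   # stands in for the key confirmed via a
                                         # captured four-way handshake

wordlist = ["password", "12345678", "qwertyui", "letmein1"]
for candidate in wordlist:
    if wpa_pmk(candidate, ssid) == target_pmk:
        print("passphrase recovered:", candidate)
        break
```

Each candidate derivation is independent of all the others, which is what makes the attack so amenable to the GPU parallelism discussed in the next section.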

Attacks against WPA using brute force with VGA GPU Power

As is the case with all new developments, over time vulnerabilities are discovered, and a once-secure environment becomes insecure in light of this knowledge. In the case of WPA, which was once considered the answer to security issues in Wi-Fi networks, the vulnerable point is the encryption, which can be broken through the use of powerful Graphics Processing Units (GPUs). Before this era in computing, GPUs only processed graphics content. However, due to the large increase in the capability of these devices, manufacturers considered ways to use that power for other, non-graphics applications (Mariziale, Richard III & Roussev, 2007). Take the case of the NVIDIA 8800 GTX, which theoretically can perform 350 GFLOPS and cost a buyer $570 in 2007. On the other hand, an Intel 3.0 GHz dual-core processor could only handle 40 GFLOPS, and yet it cost $266. This translates to approximately $1.6 per GFLOP for the 8800 GTX and approximately $6.7 per GFLOP for the dual-core processor, making the GPU much cheaper when cost is compared with performance (Mariziale, Richard III & Roussev, 2007). Another advantage of the GPU lies in its large memory bandwidth, which far exceeds that of the regular processor: 86 GB/s versus 6 GB/s. This in itself is more than enough reason to want to maximize the potential of the GPU.
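
The price/performance comparison quoted above works out as follows (simple arithmetic on the cited 2007 figures).

```python
# Price/performance comparison using the figures quoted above (2007 prices).
gpu_price, gpu_gflops = 570, 350        # NVIDIA 8800 GTX
cpu_price, cpu_gflops = 266, 40         # Intel 3.0 GHz dual-core

print(f"GPU: ${gpu_price / gpu_gflops:.2f} per GFLOP")   # ~ $1.63
print(f"CPU: ${cpu_price / cpu_gflops:.2f} per GFLOP")   # ~ $6.65
ratio = (cpu_price / cpu_gflops) / (gpu_price / gpu_gflops)
print(f"GPU is ~{ratio:.1f}x cheaper per GFLOP")          # ~ 4.1x
```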

To harness the power of such a GPU, software has to be developed using one of the few APIs that are capable of interacting with the hardware. In the case of graphics programs, it may be worth considering OpenGL or Direct3D (Mariziale, Richard III & Roussev, 2007). However, for tasks such as breaking WPA, the software is written in general-purpose languages such as C for Graphics (Cg), a high-level language based on C that contains features that make it suitable for GPU programming. In the experiment discussed here, the CUDA (Compute Unified Device Architecture) SDK was used to program the 8800 GTX GPU. The 8800 GTX operates on the principle of Single Instruction, Multiple Data, which is possible using the set of stream processors built into the hardware. Once an instruction is issued in the kernel, each multiprocessor runs a set of threads on its stream processors. The result is that there are n processors available to complete a task, where n equals the number of multiprocessors multiplied by the number of stream processors per multiprocessor. In the case of the 8800 GTX, there are 16 multiprocessors and each has 8 stream processors, giving a total of 128 processors (Mariziale, Richard III & Roussev, 2007). It is this huge increase in processing capability that is exploited when brute force is used to break WPA keys.
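
A rough sketch of how such a brute-force workload maps onto this hardware: each of the 128 logical processors tests an independent slice of the candidate key space, the "single instruction, multiple data" pattern described above (shown here in plain Python for clarity rather than in CUDA; the wordlist is hypothetical).

```python
MULTIPROCESSORS = 16
STREAMS_PER_MP = 8
PROCESSORS = MULTIPROCESSORS * STREAMS_PER_MP   # 128 on the 8800 GTX

def candidate_slice(thread_id: int, candidates: list[str]) -> list[str]:
    """Candidates assigned to one logical thread: every PROCESSORS-th entry."""
    return candidates[thread_id::PROCESSORS]

candidates = [f"pass{i:04d}" for i in range(10_000)]   # hypothetical wordlist
work = [candidate_slice(t, candidates) for t in range(PROCESSORS)]
assert sum(len(w) for w in work) == len(candidates)
print(f"{PROCESSORS} threads, ~{len(work[0])} candidates each")
```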

Having briefly discussed the power of the GPU, some information on the CUDA SDK should be useful for understanding the procedure of code-breaking in WPA. CUDA programs are written in C or C++ with specific extensions and are compiled using a dedicated compiler (nvcc) on Windows or Linux (Mariziale, Richard III & Roussev, 2007). A CUDA program executes in two separate components, namely the host and the GPU. The host component issues instructions on what operations to perform, while the GPU component creates the threads and rapidly completes the instructions. In addition, CUDA provides functions for memory management, controlling the GPU, support for OpenGL and Direct3D, and texture handling. The CUDA program alongside the GPU provides a cost-effective boost to the processing power of the computer system.

The approach also has its limitations, which include the need to maximize the use of shared memory, limit access to global memory, and prevent serialization of the threads running on the GPU. Depending on the application, these limitations are bearable when weighed against the results obtained and the time saved. With such increases in power, one may wonder why GPUs have not yet come of age and replaced regular processors for general-purpose computing. Several reasons lie behind this; for instance, GPU floating-point arithmetic is generally not IEEE-compliant, and until fairly recently the hardware offered no support for integer arithmetic (Mariziale, Richard III & Roussev, 2007). Because the large performance gains depend on floating-point computation, implementing general-purpose workloads that rely on integer arithmetic is difficult. Another problem lies in the fact that GPUs are massively parallel by nature, and at each branching operation the GPU incurs an additional cost in resources. As the threads diverge, the GPU begins executing in a serial manner that defeats its intended purpose (Mariziale, Richard III & Roussev, 2007). This suggests that algorithms need to be designed to ensure a more parallel mode of operation.

This should not be taken to mean that GPUs are inefficient; rather, the GPU is best used to handle processor-intensive tasks such as code-breaking, leaving the processor free to handle other tasks. If the GPU were to operate as the main processor, then as the threading increases, lower-priority tasks would eventually be locked out until the executing process terminates. Another shortcoming lies in the fact that the APIs used for GPU programming are still not very suitable for general-purpose programming, because they were specifically designed for coding graphics applications and are ill-suited to other purposes (Mariziale, Richard III & Roussev, 2007). The GPU technology in various graphics cards proves that the power of these devices can be harnessed to improve computer system performance. Their use in breaking the keys of wireless networks bears witness to that and provides future developers with useful insight into the way forward for network security.

Conclusion

In this paper, the discussion presented has revolved around Wi-Fi technology and the issues surrounding the security of such networks. The internet, which is in practice a global network, has greatly added value to the lives of millions of people all over the world and continues to grow. For example, an individual interested in education today has access to institutions all over the world and is able to tap into the knowledge he or she desires even without traveling. Through the use of social networking sites such as Facebook and Twitter, people all over the world can interact and share ideas and experiences. An individual interested in buying and selling stocks on Wall Street can be just as successful today whether they are in a remote village in Sudan or living in Manhattan. Its contributions to humanity cannot yet be fully gauged, but as with any innovation, it has raised new issues as well.

The security issues highlighted within the paper are proof of the vulnerability to which the users of this great breakthrough are regularly exposed. It is for this reason that fast and conclusive action should be taken to close the loopholes that exist within networks that are so useful and serve so many purposes. Anyone with knowledge of the vulnerabilities within such a system must make an effort to guard against the hazards that may emanate from using the network for any purpose. It is also encouraging to note that the hardware manufacturers involved are constantly improving the devices they offer to improve performance and reduce operating costs. Even though our systems are vulnerable, such action reflects the great and bright future ahead.

References

Computer Fraud & Security. (2001). AirSnort Tool Cracks WEP in 15 Minutes. Computer Fraud & Security, 2001(5).

Hunton, P. (2009). A Growing Phenomenon of Crime and the Internet: A Cybercrime Execution and Analysis. Computer Law & Security Review, 25, 528-535.

Kocak, T., & Jagetia, M. (2008). A WEP Post Processing Algorithm for a Robust 802.11 WLAN Implementation. Computer Communications, 31, 3405-3409.

Mariziale, L., Richard III, G. G., & Roussev, V. (2007). Massive Threading: Using GPUs to Increase Performance of Digital Forensic tools. Digital Investigation, 4, 73-81.

Rowan, T. (2010). Negotiating Wi-Fi Security. Network Security, 2010, 8-12.

Williams, P. (2006). Cappuccino, Muffin, Wi-Fi - But What About the Security? Network Security, 2006(10), 13-17.

Encryption as a Key Technological Solution to Corporate Security

Introduction

Technological advancements exhibited in the communication industry call for stringent measures to ensure the security of information transmitted over distribution channels. Such information is protected by transforming messages from their original readable text into a more complicated form known as ciphertext, which requires special knowledge to access. This encryption technique ensures the confidentiality of information, as only the transmitter and the recipient have access to the secret key needed to decrypt the message (Brenton, 1999). Encryption has been successfully employed by many governments as well as militaries to enhance the secrecy of their communication. It is currently utilized in many civilian systems to protect both data in transit and stored information. Data stored in computers or other storage devices such as flash disks can be protected against leakage through encryption. Encryption of data in transit is also necessary to protect such information from interception during communication over the telephone, the internet, and other communication systems (Brenton, 1999).

Applications of encryption

Pretty good privacy (PGP)

This is one of the encryption applications developed in the early nineties by Phil Zimmermann to enhance the cryptographic security of transmitted information. PGP is a cryptosystem encompassing both public-key and conventional cryptography. When plaintext is encrypted with PGP, it is first compressed, so both space and time are effectively utilized (PGP, 2004). Compression also strengthens resistance to cryptanalysis, since it eliminates the patterns in the plaintext that such techniques exploit to crack the cipher. Subsequently, PGP creates a session key that encrypts the plaintext into ciphertext using a fast and secure conventional encryption algorithm. This session key is then encrypted to the public key of the recipient and transmitted to the recipient along with the ciphertext (PGP, 2004).

In decryption, the recipient's copy of PGP uses his private key to recover the session key, which is then used to decrypt the ciphertext, making it readable again (PGP, 2004).
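
That compress-then-encrypt flow can be sketched as follows with Python's standard zlib module and the cryptography package; this illustrates the scheme described by the PGP documentation, not the OpenPGP message format itself.

```python
import zlib
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Sender: compress, encrypt with a one-time session key, wrap the session key.
plaintext = b"The quarterly report follows..." * 50
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(zlib.compress(plaintext))
wrapped_key = recipient_private.public_key().encrypt(session_key, oaep)

# Recipient: unwrap the session key with the private key, decrypt, decompress.
recovered_key = recipient_private.decrypt(wrapped_key, oaep)
assert zlib.decompress(Fernet(recovered_key).decrypt(ciphertext)) == plaintext
```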

Smart credit card

A smart card has a built-in microprocessor necessary for the verification process. Anyone using the card has to ascertain his identity whenever a transaction is made. The card and the reader execute a chain of encrypted signals to confirm that both parties to the transaction are genuine. The transaction itself is performed in encrypted form to enhance the security of the information (Brenton, 1999). As a result, the chances of either party defrauding the system are minimized. Such cards are currently used in many businesses in the U.S. as well as Europe.

Personal Identification Number (PIN)

This is a coded identification number that is entered into the automatic teller machine together with the bank card to ascertain the legitimacy of the bearer before a transaction is carried out. The PIN is stored in encrypted form on the ATM card or in the bank's computers. Given the PIN and the bank's keys, it is possible to compute the cipher but not the reverse, since the transformation is a one-way cryptographic operation. This system protects the information against leakage or interception by adversaries (Brenton, 1999).
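
The one-way property can be illustrated with a salted hash, as in the minimal sketch below; real ATM networks use dedicated PIN-block schemes, so this is only a demonstration of why the stored value cannot be reversed into the PIN.

```python
import hashlib, hmac, os

def pin_digest(pin: str, salt: bytes) -> bytes:
    """One-way transformation of the PIN: easy to compute, infeasible to invert."""
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)

salt = os.urandom(16)                 # stored alongside the digest
stored = pin_digest("4921", salt)     # what is kept on record; never the PIN itself

def verify(entered_pin: str) -> bool:
    return hmac.compare_digest(pin_digest(entered_pin, salt), stored)

print(verify("1234"))   # False
print(verify("4921"))   # True
```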

Secure Electronic Transaction (SET)

This is a procedure developed by Visa and MasterCard that utilizes a public-key system to enhance the security of payment transactions in a business. The protocol provides data integrity in addition to confidentiality. Moreover, it also verifies the authenticity of both the cardholder and the merchant. Because dual signatures are used, leakage of information is highly unlikely in this protocol (Segev, Porra, & Roldan, 1998).
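
The dual-signature idea can be sketched as signing the combined hashes of the order information and the payment information, so that each party can verify the signature while seeing only its own half; this is a simplified illustration, not the exact SET message format.

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

customer_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

order_info = b"1x laptop, ship to 22 Main St"      # seen by the merchant only
payment_info = b"card 4111..., amount $900"        # seen by the bank only

# The dual signature binds both messages together; each party receives its own
# message plus only the hash of the other message.
linked_digest = sha256(sha256(order_info) + sha256(payment_info))
dual_signature = customer_key.sign(linked_digest, pss, hashes.SHA256())

# The merchant can verify using order_info and the hash of payment_info alone.
customer_key.public_key().verify(dual_signature, linked_digest, pss, hashes.SHA256())
```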

Implications of encryption on organizations

Various corporate businesses as well as private ventures came to depend heavily on information in the late 20th century as a result of the transition witnessed in the communication industry. This entailed better access to affordable communications as well as the capability of such ventures to obtain, store, and distribute vast amounts of information. Developments such as e-banking, personal computers, e-commerce, and internet use are some of the products of this revolution that influenced every aspect of business activity in that era (Segev, Porra, & Roldan, 1998). Cryptology has been fundamental to the protection of information during communication, especially in the instances mentioned above. It is therefore noteworthy that cryptology extends beyond the provision of secrecy to encompass the protection of information integrity against interference by adversaries. In e-commerce, for instance, the transactions between the customer and the merchant are protected through encryption so as to preserve the confidentiality of the information. Moreover, the merchant is assured of full payment, as the information concerning transactions is protected and the customer cannot claim otherwise (Segev, Porra, & Roldan, 1998).

As stated before, the science of encryption has been helpful not only in ensuring the secrecy and confidentiality of information but also in preserving the integrity of transactions across corporate networks. Besides, encryption also helps in verifying the authenticity of messages in a communication. According to PGP (2004), conventional encryption is both fast and convenient for the protection of stored data.

However, encryption products may not be perfect as far as the protection of the integrity, secrecy, and authenticity of messages is concerned; additional techniques are needed to protect the authenticity and integrity of messages (Brenton, 1999). At the outset, encryption of e-mails has to be accompanied by digital signatures at the point of their creation so as to guarantee the integrity of the information. Without such signatures, the sender can argue that the information was tampered with before encryption but after it had left his or her computer. Additionally, sending e-mails from outside the organizational network may not be practical for mobile users of an encryption product. Using encryption to protect information can also be challenging when a mistake is made in designing or implementing the system; in such circumstances, unencrypted information may be accessed by adversaries even without decryption, paving the way for successful attacks. Moreover, poor handling of cipher keys also poses risks to data protection, since such errors may enable adversaries to gain access to vital information in the communication (Brenton, 1999).

Trust has to be developed between the sender and the recipient of the encrypted message so as to ensure the secrecy of the key, thereby protecting it from interception by any adversary. If anyone intercepts the messages in a communication, he or she can forge or modify the information, thereby exposing vital transaction details that may be used to sabotage the operations of the organization.

Evolution of old and current encryption practices

Originally, cryptography entailed concealing information and subsequently revealing it to legitimate users through the use of a secret key. This involved transforming information from plaintext to ciphertext and back via encryption and decryption respectively, which ensured the security of such data. During the world wars, encryption ensured only the confidentiality of written messages (Segev, Porra, & Roldan, 1998). However, the same principles have been found to work well with modern technologies. Encryption currently encompasses the protection of information stored in computers as well as information flowing between such electronic devices (Segev, Porra, & Roldan, 1998).

Besides, signals from fax machines as well as TVs are also encrypted, and encryption is used to verify participants' identities in e-commerce. When combined with other techniques such as digital signatures, encryption ensures not only the confidentiality of messages but also the integrity and authenticity of information communicated across networks. Generally, the evolution of encryption as a technology for protecting information is attributed to changes in information technology, e-commerce, and internet use. Public-key cryptography provides for the secure exchange of information between individuals who have no prior security arrangement. Unlike public keys, private keys are never shared, which improves the security of information, since anyone holding the public key can only encrypt a message but not decrypt it (Segev, Porra, & Roldan, 1998).

Conclusion

Encryption has been an important technique for ensuring the confidentiality of information in communication. It transforms information from its original form, known as plaintext, into ciphertext, which requires a special key to access. The encrypted information therefore cannot be accessed by anyone other than the transmitter and the recipient, who hold the secret key. Consequently, the information is protected from interception. Developments in information technology, e-commerce, and the internet have made it necessary to protect both data in transit and information stored on computers. The encryption technique is therefore vital for organizations, as it enhances the security of information across networks. However, encryption alone may not be sufficient to secure information and therefore requires other techniques to preserve the integrity and authenticity of messages in a communication (Segev, Porra, & Roldan, 1998).

Reference List

Brenton, C. (1999). Authentication and Encryption. Sybex, Inc. Web.

PGP. (2004). An Introduction to Cryptography. Web.

Segev, A., Porra, J., & Roldan, M. (1998). Communications of the ACM, 41(10), 81-87. Web.

Encryption Techniques for Protecting Big Data

Introduction

Encryption encodes information into ciphertext that protects data from interception and unauthorized access. This technique is commonly used in the IT and engineering spheres when professionals need to secure big data and transfer it to the customer. There are two types of encoding, called symmetric and asymmetric, and they are used in diverse cases such as data integrity, authentication, and privacy (Kaur & Kumar, 2020). Data protection is considered one of the most important aspects of writing code. Consequently, this essay will compare the two types of encryption techniques and evaluate their importance in protecting big data.

Encryption Techniques and Their Importance

Symmetrical encryption uses special algorithms to encrypt and decrypt information. There is only one key, called the secret key, which converts data into code that is not understandable to anyone else. Once the encoded message is received by those who need it, the corresponding algorithm transforms the code back into its original form. The information can be accessed using a specific password available to those who work with the code. This password is generated by the system and includes random numbers and letters. Every sphere that requires information encryption and password generation has unique algorithms not available to those who do not work in that particular area. According to Kaur and Kumar (2020), there are two types of symmetrical encryption: block algorithms and stream algorithms. Block algorithms transform information in fixed-size chunks of electronic data, while stream algorithms encrypt data as a continuous sequence.
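
The difference can be sketched with the Python cryptography package (a tooling assumption, not something the cited review prescribes): AES works on fixed 16-byte blocks, while ChaCha20 produces a keystream and needs no padding.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)
message = b"big data record #1".ljust(32)   # padded to a multiple of the block size

# Block cipher: AES in CBC mode works on 16-byte blocks.
iv = os.urandom(16)
block_enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
block_ct = block_enc.update(message) + block_enc.finalize()

# Stream cipher: ChaCha20 XORs the data with a keystream, no padding needed.
nonce = os.urandom(16)
stream_enc = Cipher(algorithms.ChaCha20(key, nonce), mode=None).encryptor()
stream_ct = stream_enc.update(b"big data record #1") + stream_enc.finalize()

print(len(block_ct), len(stream_ct))   # 32 vs 18 bytes
```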

Asymmetrical encryption has approximately the same functions as the symmetrical technique but uses two separate cryptographic keys. These keys are commonly known as the public key and the private key, and they have different responsibilities (Kaur & Kumar, 2020). The public key is typically used for encryption, the private key is used for decryption, and the encrypted information can be accessed only by the holder of the private key. Both keys are mathematically connected, and generating them as a pair helps secure the encryption process.

Both techniques are used to secure data and ensure the safe transfer of information. The keys used in the process allow developers and customers to have protected access to specific data that other people should not be able to reveal. Asymmetric and symmetric techniques are commonly used by banks, where all transactions should be secured and transferred in coded form to decrease the risk of stolen money or hacked accounts. Software developers can decide which type of encryption process suits a project best. For simpler projects, the symmetric technique can be used, as it requires less time to generate, while more complicated schemes require professionals to use asymmetrical methods.

Nevertheless, it is important to understand the most visible differences between these two types of encryption methods. For instance, the ciphertext is shorter in the symmetrical method, and the generation of codes does not take much time, whereas asymmetrical encryption requires developers to encode more information, making it more complicated. Moreover, the asymmetrical type is more secure than the symmetrical one and is used in more sensitive areas. The symmetrical technique is easy to use because encryption and decryption are conducted with a single key, while the asymmetrical one requires developers to combine private and public keys. Nonetheless, symmetrical encryption is the older form of encoding, and asymmetrical encryption is in this respect considered more reliable.

Symmetrical encryption is more suitable for big data, as it does not lengthen messages, and long messages can be encoded more quickly. Big data includes many complicated pieces of information that are constantly renewed and must be protected. Consequently, the symmetrical technique offers a simple method that allows developers to protect data efficiently. The asymmetrical technique is rarely used in massive encryption processes, as it increases the length of the cryptogram, which is not beneficial in the encoding process; the cost rises significantly, and the demand for this method decreases. Moreover, the symmetrical method uses less power and can perform encryption and decryption simultaneously. The hybrid technique can also be useful in data protection, but it is not easy, and software developers should have experience in combining different techniques. Nevertheless, the hybrid method helps control encryption during a session, and when the session finishes, the developer can save the changes or leave them unsaved. The session key is generated for a single access and is not saved in the system. These methods help ensure data security and allow testing of different changes that might affect big data.
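
The ciphertext-length point can be made concrete with a small comparison (a sketch assuming the Python cryptography package): a stream cipher leaves the data the same size, whereas chunking the same data through RSA-2048 inflates it by roughly a third and is far slower.

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

data = os.urandom(1_000_000)   # 1 MB of hypothetical "big data"

# Symmetric (stream): ciphertext is the same length as the plaintext.
key, nonce = os.urandom(32), os.urandom(16)
enc = Cipher(algorithms.ChaCha20(key, nonce), mode=None).encryptor()
sym_ct = enc.update(data) + enc.finalize()

# Asymmetric (RSA-2048 + OAEP): each chunk of at most 190 bytes becomes 256 bytes.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
pub = rsa.generate_private_key(public_exponent=65537, key_size=2048).public_key()
asym_ct = b"".join(pub.encrypt(data[i:i + 190], oaep)
                   for i in range(0, len(data), 190))

print(len(sym_ct), len(asym_ct))   # 1,000,000 vs ~1,348,000 bytes (and far slower)
```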

Conclusion

In conclusion, encryption helps to protect different types of data, and the diverse techniques give developers the chance to test or adjust changes to specific information. Symmetrical and asymmetrical encryption techniques are the key aspects of the field, and they allow IT professionals to work with different amounts of data. By understanding the key similarities and differences, encoders are able to use both techniques in the right way. Each method has advantages and disadvantages, and if people understand how to reduce the negative effects of these techniques, the data encryption process can become more reliable and secure.

Reference

Kaur, M., & Kumar, V. (2020). A comprehensive review on image encryption techniques. Archives of Computational Methods in Engineering, 27, 15-43.

Strong Encryption as a Social Convention

Introduction

People can be divided according to different criteria. With regard to strong encryption, people are divided into those who encrypt and those who do not. Nowadays, a number of discussions and misunderstandings surround the term strong encryption and its importance for society. Many American states and several European countries support the idea of banning strong encryption. However, there are also countries, like the Netherlands, that are eager to give many arguments in favor of strong encryption. The point is that not all people are aware of this term and its characteristics. Therefore, before defining strong encryption as a moral, legal, or even economic issue, it is necessary to comprehend it as a matter of social convention and make sure that all people know what strong encryption means and how it may influence human life.

General facts about strong encryption

It is easy to find a definition of strong encryption online and get an idea of what it means. However, even when the definition is memorized, not many people are able to comprehend its essence, worth, and possible impact on society. Strong encryption is a form of communication protection that withstands any kind of cryptographic analysis and keeps content available and readable only to the intended group of people. People try to invent and develop new programs and methods to recover the information in such messages. However, strong encryption aims at creating powerful protection: much time and effort are necessary to decrypt the required portion of information, and, as a rule, not all attempts to decrypt it are successful.

Strong encryption and morality

The governments of many countries and the representatives of several American states admit that strong encryption is a serious threat to people and their security. In the light of the terrorist attacks in Paris and California, the Obama administration has made several attempts to ban encrypted communication (Peterson par. 10). Politicians and military representatives want to change the conditions under which people may communicate and use the idea of safety measures to support their positions. However, it is necessary to remember that the Internet belongs to people around the whole globe, and governments do not have the right to control it or define the conditions under which people may use it (Peterson par. 2). Many organizations try to raise the importance of moral issues in the debate over banning strong encryption.

Strong encryption as a matter of social convention

Still, it is necessary to understand that not all people know enough about strong encryption and the abilities it gives them. Many people continue using the Internet as they did several years ago and enjoy the possibilities they get. To make strong encryption an ethical issue, it is necessary to introduce it as a matter of social convention first. People should know as much as possible about the positive and negative aspects of strong encryption and make their own independent decisions about whether to support or ban it. America is one of the countries that have been promoting the idea of personal freedoms for a long period of time; yet as soon as the government of the country sees a freedom as a threat, it tries to ban it within a short period. Such an example should bother society and make people think about other opportunities that can be banned by the government as soon as they are identified as threats.

Conclusion

In general, society should understand that the intention to gain control over everything is useless. It is not a way to save people; it only pushes them to think about other, more dangerous ways to get around the law and achieve the goals they have set.

References

Peterson, Andrea. Debate over Encryption Isn't just Happening in the US. NZ Herald (2016). Web.

PGP Encryption System as a Good Idea

Introduction

Privacy and internet security is a subject of great interest in todays world of globalization. The information we send or receive need to be in a most secured manner. Encryption is a form of coding the data into a form called a ciphertext which cannot be easily understood by unauthorized people. In order to decode the encryption, a decryption system is required that will be known only by the authorized receiver. Decryption is the method of converting encrypted data back into its original form that can be readable by the end-user. The system of encryption and decryption is gaining great significance in the world of wireless communications.

This is mainly because the wireless communication medium can be easily tapped and misused. Therefore, it is vital to use encryption and decryption systems especially to protect privacy. In general, it can be said that the stronger the ciphertext the better is the security (Bauchle, et al. 2009). This paper discusses the Pretty Good Privacy (PGP) encryption system in general and why it is good for both individual as well as organizational use.

Pretty Good Privacy (PGP)

Pretty Good Privacy is in general abbreviated as PGP and is a computer program that helps secure online transitions. Specifically, PGP provides cryptographic privacy and authentication. It is often used for signing, encrypting, and decrypting e-mails. Today, this has become one of the most reliable privacy systems that are easy and effective methods. PGP encryption works by making use of public-key cryptography which is in general linked with the public keys to a user ID or an e-mail address.

The popularity of PGP is increasing among individuals, organizations, and business for several reasons such as its confidentiality, authentication, integrity, double encryption, etc. It is essential to ensure that only the intended receiver reads the message and for this purpose, the message is encrypted using the receivers public key that can be only encoded by the receiver using his private key. Hence, confidentiality is high while using PGP.

PGP a Good idea for Individuals

There are several reasons that PGP is highly recommended for individual use. It is a foremost contributor of privacy solutions that avert the risk of unauthorized access to the digital property by defensive it at the source. Additionally, PGP gives individuals full protection tools to protect themselves against privacy breaches. It is understood that it is only with the use of PGPs technology that individuals make the assessment concerning the information that needs to be released about them, and also it gives the individual all rights to determine what information to be released. In fact, the latest versions of the PGP cookie.cutter help individuals to securely surf the Web, without any worry that information about them can be tracked by unauthorized users (ftc.gov, N.D.).

PGP has also helped to form a web of trust. For instance, if one user knows another users certificate is valid, then he can sign the other users certificate. Therefore, the group of individuals having trust in the first user will automatically trust the second user certificate. One of the greatest advantages of PGP is that it allows an infinite number of users to sign each certificate. In other words, a network or web of trust is developed as more and more users vouch for each others certificates (networkcomputing.com, 2009).

PGP a Good idea for Organizations

Several organizations use the PGP system for the safe transfer and storage of information. In addition to shielding or protecting data in transfer over a network, PGP encryption system is also effectively used to protect data storage for a longer period of time.

This is of great significance when it comes to organizations because it is possible to store private data in a most secured manner. The latest version of PGP is much more beneficial for the organizations as it has added additional encryption algorithms. In other words the degree of their cryptographic vulnerability differs with the algorithm used and in most of the cases the algorithms used in recent years is not publicly known to have cryptanalytic weaknesses (Wikipedia, 2009). Hence, it is always recommended to use a good encryption system such as PGP to ensure organizational privacy.

PGP provided the opportunity for authentication. The sender can encrypt the memo by means of the senders public key and also his own private key. This is known as authentication. On the other end the receiver of the memo then decrypts it using the senders public key and his own private key. Given that the receiver has to use the senders public key to decrypt the massage, only the sender could have encrypted it using his private key. PGP also uses digital signatures to integrate the memo. The digital signatures help to further authenticate the memo.

The security of PGP encryption system is comparatively high because of the utilization of double encryption. It is through a combination of both symmetric and asymmetric encryption that PGP ensures high security and also high speed. It is of great significance to the organizations because PGP allows the sender to send a single encrypted message to various recipients. At the same time it does not require to re-encrypt the entire data. However, if the sender were using a conventional asymmetric encryption system, then it would be even more difficult task to re-encrypt the whole memo for each recipient individually. This is not only time consuming but is also a tedious job.

Conclusion

PGP encryption is comparatively cheaper both for individual as well as organizational use (Alchaar, et al. N.D.). In fact, the commercial licence draws payment of a small amount. Since it is easy to uses for local file encryption, secure disk volumes and network connections, organizations find much more usefulness of PGP. PGP is a good option for both individual and organizational use as it is highly impossible to crack the double encryption of PGP. It is essential to understand the need for strong encryption as it is the only means to boost up the security and prevent crime. A strong encryption does not allow the easy tracking and therefore prevents the misuse of personal details.

References

  1. Alchaar, H., Jones,J., Kohli, V. and Wilkinson, K. (N.D.) Encryption and PGP. Web.
  2. Bauchle, R., Hazen, F., Lund, J., Oakley, G. and Rundatz, F. (2009) Web.
  3. ftc. (N.D.) Consumer Privacy 1997  Request To Participate, P954807. Web.
  4. networkcomputing. (2009) PGP Grows Up. Web.
  5. Wikipedia (2009) Pretty Good Privacy. Web.

Defeating WI-FI Protected Access Encryption With Graphics Processing Units

Outline

In the field of computing, the past 20 years or so have been marked by an increase in the capability of the devices and a massive increase in the use of computers connected via networks to carry out business and related tasks. The processing power of desktop computers has increased almost 100 times. The processing power of an average desktop computer today is in the range that was only dreamed about in the 80s. Alongside this development, people began to use computers to do much more than word processing, preparation of spreadsheets, and simple database tasks that characterized the MS-DOS era. Computer networks link banks, schools, hospitals, government agencies, and people, making work much easier to accomplish. As populations become more reliant on these devices and networks, crime has also begun to emerge due to vulnerabilities. In this paper the discussions presented will introduce a concept in modern computer networks namely Wi-Fi. The discussions will briefly highlight how this concept is implemented and focus on the threat caused by increased insecurity caused by high-powered Graphics processing Units.

The Introduction of Wireless Technology

Networks and networking are commonly used terms in the field of computing. This term often refers to a connection of various computers and devices through the use of communication channels. Networks are important because they increased efficiency by allowing users to share resources. For example, in an office, it is common to see a single printer used to serve many computers or workstations. This is made possible via a network that relies on wired or wireless technology to provide the services of printing to the various computers at the same time. In the absence of this network, each computer would have to be attached to a separate printer thus increasing operating costs.

The advent of the internet saw a vast increase in the use of the internet. The internet is a global network of computer networks that brings together governments, learning institutions, commercial and other agencies together, thus allowing a large pool of easily accessible resources to millions of people all over the world. As more and more people began to use the internet to meet various daily needs the computer industry was under a lot of pressure to improve the quality of networks. The gradual process of improvement led to the type of networks that this paper will focus on, namely, wireless networks or Wi-Fi.

As earlier stated, a computer network provides a communication backbone through which various computers and peripheral devices can be shared. As the name suggests a wireless network provides users with the advantage that connections from one point to the next, do not require cables. The wireless network is thus much easier to set up and the lack of wires reduces maintenance costs. These networks make use of remote information transmission through electromagnetic waves such as radio waves. In recent years the telecommunication industry has also grown and a new and popular type of wireless network exists in the domain of cellular networks which can transmit voice and data over-improved channels. These wireless networks have become very popular across the developed world and it is not uncommon to find these hotspots in coffee bars, airports, colleges, train and bus stations, etc. They offer people great flexibility but may be capable of putting the unsuspecting would-be users in harms way. It is primarily for this reason that entrepreneurs interested in using this technology for their business need to be aware of the security risks that such networks imply. For example, within an unobstructed space, a wireless network can travel as far as 500 meters, including up heating or elevator shafts (Williams, 2006). It is difficult to ensure that the signals will not travel further than the business space they are meant to cover. Initially, the networks relied upon the Wired Equivalent Privacy (WEP) standard to provide security to the data that was being transmitted to deter interception. WEP in its basic form made use of 40-bit static keys and RC4 encryption to provide the security equivalent to that provided on a wired network. The fact that wireless networks do not need an access point to access data made this approach slightly inefficient. An improved approach was then developed, namely, Wi-Fi Protected Access (WPA), that utilizes an 8 bit MIC that ensures no tampering with data being transmitted (Williams, 2006).

This paper discusses an emerging technique that compromises wireless networks through the use of Graphics Processing Units (GPUs). These new graphics adapters contain several general-purpose processors, as opposed to the special-purpose hardware units that characterized their predecessors (Mariziale, Richard III & Roussev, 2007). It is in light of such threats that this paper seeks to demonstrate the risks underlying the use of wireless networks for commercial purposes.

Wireless Weaknesses in WEP

The Wired Equivalent Privacy (WEP) standard is used in the IEEE 802.11 protocol and is known to possess serious security flaws that make the network vulnerable to malicious attacks and intrusion. This is cause for concern given that wireless devices are proliferating rapidly and are expected to soon outnumber traditional wired clients. The main driver behind this proliferation is the need for businesses to cut costs and improve service delivery. Wireless networks currently bring together devices ranging from embedded microdevices to larger general-purpose PCs. The price of networking has fallen, available speeds have increased, and people are increasingly dependent on these networks for work and routine tasks such as bill payments and making reservations (Kocak & Jagetia, 2008).

However, the security of data and the privacy of Wi-Fi networks remain questionable. In practice, almost any unauthorized user with the necessary know-how can access, modify, or use the data transmitted over a Wi-Fi network. It is therefore no surprise that as these networks grow and people store and share more important information, hackers have begun to prey on unsuspecting users. Such incidents have led to increased research into the security of wireless networks in recent years. It is also important to note that WEP is harder to implement on microdevices with low processing power and limited memory (Kocak & Jagetia, 2008).

As mentioned earlier, WEP operates in compliance with the IEEE 802.11 standard for wireless networks. This standard defines the basic over-the-air interface used between a wireless client and a base station, or between two or more wireless clients. It was introduced to unify operating protocols and promote interoperability between devices from different manufacturers, and its high data rate and simple encryption technique made it very popular. One of its major shortcomings is that it mainly addresses the physical layer, which is chiefly concerned with easing transmission between devices. Data security and access control are handled poorly, leaving a major loophole for would-be attackers. The WEP protocol has been found to have serious flaws owing to the easily broken cryptographic techniques used in data transmission (Kocak & Jagetia, 2008).

Since WEP is intended to provide the same security as a wired network, it uses a shared-key authentication technique to identify stations and clients. In a wired network this key is never transmitted in the open, but in a wireless network there is no physical entry point and the key is effectively exposed. To perform shared-key authentication, the network conveys both the challenge and the encrypted challenge over the airwaves. With both of these in hand, an attacker can attempt to recover the pseudo-random keystream generated from the key/IV pair. Because WEP uses the same key to encode and decode a message, once the key/IV pair used for an exchange has been computed the message is no longer secure from prying eyes. This is best illustrated by software that passively monitors encrypted traffic and attempts to recover the key once enough packets have been gathered. Some publicly available versions of such software can break the RC4-based scheme in as little as 15 minutes, depending on the volume of traffic; on higher-volume networks the task is accomplished faster, since roughly 1 GB of captured data is needed to recover the key (Computer Fraud & Security, 2001).
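
To make the key/IV mechanism concrete, the following Python sketch (purely illustrative, with invented key and IV values, and with the CRC-32 integrity value omitted) shows how a WEP-style per-packet key is formed by prepending the 24-bit IV to the shared 40-bit key before it is fed to RC4. Any two packets that reuse an IV are encrypted with an identical keystream, which is exactly the weakness the monitoring software exploits.

    # Illustrative sketch of WEP-style encryption (RC4 over IV || shared key).
    # Educational only; the 40-bit key and 24-bit IV mirror basic WEP, and the
    # CRC-32 integrity value that real WEP appends is omitted.

    def rc4_keystream(key, length):
        # Key-scheduling algorithm (KSA)
        s = list(range(256))
        j = 0
        for i in range(256):
            j = (j + s[i] + key[i % len(key)]) % 256
            s[i], s[j] = s[j], s[i]
        # Pseudo-random generation algorithm (PRGA)
        out, i, j = [], 0, 0
        for _ in range(length):
            i = (i + 1) % 256
            j = (j + s[i]) % 256
            s[i], s[j] = s[j], s[i]
            out.append(s[(s[i] + s[j]) % 256])
        return bytes(out)

    def wep_encrypt(shared_key_40bit, iv_24bit, plaintext):
        per_packet_key = iv_24bit + shared_key_40bit      # IV is sent in the clear
        keystream = rc4_keystream(per_packet_key, len(plaintext))
        return bytes(p ^ k for p, k in zip(plaintext, keystream))

    iv = bytes([0x01, 0x02, 0x03])                  # 24-bit IV, transmitted unencrypted
    key = bytes([0x0A, 0x0B, 0x0C, 0x0D, 0x0E])     # 40-bit shared secret (example)
    print(wep_encrypt(key, iv, b"hello").hex())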

Attacks against WEP: Types Used (Theoretical and Technical Description)

From the details provided in the section above, it is clear that WEP can easily be compromised and that more stringent security is required to protect a wireless network. Attacks on a WEP network can be classified as either direct or passive. In a direct attack, the attacker modifies the contents of the data being transmitted over the network. This is possible because every data packet traveling on such a network carries a short 24-bit initialization vector; with a value this small, repetition is bound to occur within fairly short intervals, creating an opportunity to capture a key and use it to intercept data. In a passive attack, the attacker violates the integrity of the network by sniffing: analyzing traffic to identify repeated initialization vectors and then redirecting the recovered information to the attacker. Another passive approach uses precomputed tables to decrypt all the data transmitted on a network. Both modes of attack depend on the amount of traffic on the network; the heavier the traffic, the quicker they succeed. WEP is therefore highly vulnerable and will most likely fail to achieve its goals against an attacker who is well informed about its weaknesses, a fact proven by the numerous tools that have been developed to crack such networks (Kocak & Jagetia, 2008).
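
The speed with which a 24-bit initialization vector repeats can be estimated with the standard birthday bound; the following sketch is rough arithmetic only, not an attack tool, and the packet counts are chosen purely for illustration.

    # Rough birthday-bound estimate of how quickly a 24-bit WEP IV repeats.
    import math

    iv_space = 2 ** 24                     # 16,777,216 possible IV values

    def collision_probability(n, space=iv_space):
        # Probability of at least one repeated IV after n packets
        return 1.0 - math.exp(-n * (n - 1) / (2.0 * space))

    for packets in (1000, 5000, 10000, 40000):
        print(packets, round(collision_probability(packets), 3))
    # On a busy network sending thousands of packets per second, a repeated IV
    # becomes likely within minutes, which is what passive sniffers wait for.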

The Migration to WPA and WPA2 Encryption

The failures of WEP have not gone unnoticed, and the result has been two further security alternatives, WPA and WPA2. Wi-Fi Protected Access (WPA) was developed as a short-term solution to the problems that arose from the use of WEP and was designed specifically for compatibility with hardware capable of supporting WEP. Unlike WEP, which was developed in compliance with IEEE 802.11 standards, WPA did not initially fall under any ratified IEEE standard. The WPA protocol provides an improved key management scheme known as the Temporal Key Integrity Protocol (TKIP). This protocol was a great improvement over WEP, although implementation required some upgrading of access points; this ceased to be an issue after 2003, when most client and access-point hardware incorporated the technology. The encryption algorithm is similar to WEP's, but the length of the initialization vector has been increased to 48 bits (Rowan, 2010), and the size of this space makes packet-key collisions difficult to provoke. In addition, the protocol has a second data layer that protects against packet replay, removing the practice of injecting packets to trigger key collisions that hackers commonly used against WEP. If the algorithm detects packets with the same key within sixty seconds of each other, WPA shuts the network down for sixty seconds.

In practice, WPA supports operation either in Pre-Shared Key (PSK) mode or with the Extensible Authentication Protocol (EAP). In Pre-Shared Key mode both communicating sides must know the key, which can be sixty-four hexadecimal digits or a passphrase of eight to sixty-three characters. If a weak pre-shared key is chosen, WPA is prone to brute-force attacks that use lookup tables and increased processing power to speed up the cracking process. The Extensible Authentication Protocol improves client identification but is out of reach for most users, who do not want to spend significant sums on the required equipment (Rowan, 2010). These flaws prompted further improvements and brought about WPA2, which fully complies with the IEEE 802.11i standard. Under WPA2, TKIP is replaced by an AES-based protocol that appears to be fully secure, although many manufacturers have yet to ship the required software upgrades (Rowan, 2010). It may be argued that WPA2 should be enforced even if it compromises device compatibility, because it offers the best security.
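
In Pre-Shared Key mode the 256-bit master key is derived deterministically from the passphrase and the network name (SSID). The following sketch uses Python's standard library to reproduce that derivation; the passphrase and SSID shown are invented. Because anyone who captures a handshake can repeat the same derivation for candidate passphrases offline, a weak passphrase is exposed to exactly the brute-force attack discussed in the next section.

    # Sketch: WPA/WPA2 Pre-Shared Key (pairwise master key) derivation.
    # PMK = PBKDF2-HMAC-SHA1(passphrase, SSID, 4096 iterations, 32 bytes).
    # The passphrase and SSID below are invented for illustration.
    import hashlib

    def derive_pmk(passphrase: str, ssid: str) -> bytes:
        return hashlib.pbkdf2_hmac("sha1",
                                   passphrase.encode("utf-8"),
                                   ssid.encode("utf-8"),
                                   4096, dklen=32)

    pmk = derive_pmk("correct horse battery", "CoffeeShopWiFi")
    print(pmk.hex())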

Attacks against WPA using brute force with VGA GPU Power

As is the case with all new developments, vulnerabilities are discovered over time and a secure environment becomes insecure as this knowledge spreads. In the case of WPA, once considered the answer to Wi-Fi security issues, the vulnerable point is the encryption, which can be broken through the use of powerful Graphics Processing Units (GPUs). Until recently GPUs processed only graphics content, but the large increase in their capability led manufacturers to consider using that power for non-graphics applications (Mariziale, Richard III & Roussev, 2007). Take the case of the NVIDIA 8800 GTX, which could theoretically perform 350 GFLOPS and cost a buyer $570 in 2007, whereas an Intel 3.0 GHz dual-core processor could handle only about 40 GFLOPS and yet cost $266. This translates to roughly $1.60 per GFLOP for the 8800 GTX versus roughly $6.70 per GFLOP for the dual-core processor, making the GPU far cheaper when cost is compared with performance (Mariziale, Richard III & Roussev, 2007). Another advantage of the GPU lies in its memory bandwidth, which far exceeds that of a regular processor: roughly 86 GB/s versus 6 GB/s. This in itself is more than enough reason to want to maximize the potential of the GPU.
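
The price-performance comparison above is simple arithmetic; the following short sketch reproduces it from the quoted 2007 figures.

    # Cost per GFLOP implied by the 2007 figures quoted above.
    gpu_price, gpu_gflops = 570.0, 350.0      # NVIDIA 8800 GTX
    cpu_price, cpu_gflops = 266.0, 40.0       # 3.0 GHz dual-core CPU
    print(round(gpu_price / gpu_gflops, 2))   # ~1.63 dollars per GFLOP
    print(round(cpu_price / cpu_gflops, 2))   # ~6.65 dollars per GFLOP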

To harness the power of such a GPU, software has to be developed using one of the few APIs capable of interacting with the hardware. For graphics programs it may be worth using OpenGL or Direct3D (Mariziale, Richard III & Roussev, 2007), but for tasks such as breaking WPA the software relies on general-purpose languages such as C for Graphics (Cg): high-level languages based on C with features that make them suitable for GPU programming. In the experiment considered here, the CUDA (Compute Unified Device Architecture) SDK was used to program the 8800 GTX. The 8800 GTX operates on a Single Instruction Multiple Data principle made possible by the set of stream processors built into the hardware. Once an instruction is issued in the kernel, each multiprocessor runs a set of threads on its stream processors. The result is that there are n processors available to complete a task, where n equals the number of multiprocessors multiplied by the number of stream processors per multiprocessor. The 8800 GTX has 16 multiprocessors, each with 8 stream processors, for a total of 128 processors (Mariziale, Richard III & Roussev, 2007). It is this huge increase in processing capability that is exploited when brute force is used to break WPA keys; a sketch of the kind of search being parallelized is given below.
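
As a rough illustration of the workload that gets parallelized, the following CPU-bound Python sketch derives a candidate key for each guessed passphrase and compares it against a reference value. The SSID, the wordlist, and the use of the standard multiprocessing module are stand-ins for what a real GPU cracker distributes across its 128 stream processors; a real attack also compares values derived from a captured handshake rather than the master key directly.

    # CPU-bound analogy of the brute-force search a GPU cracker parallelizes.
    import hashlib
    from multiprocessing import Pool

    SSID = "CoffeeShopWiFi"        # invented example network name
    # Reference master key; in a real attack the check is done against
    # values derived from a captured four-way handshake.
    TARGET_PMK = hashlib.pbkdf2_hmac("sha1", b"letmein99", SSID.encode(), 4096, dklen=32)

    def try_passphrase(guess: str):
        pmk = hashlib.pbkdf2_hmac("sha1", guess.encode(), SSID.encode(), 4096, dklen=32)
        return guess if pmk == TARGET_PMK else None

    if __name__ == "__main__":
        wordlist = ["password", "12345678", "letmein99", "qwertyuiop"]
        with Pool() as pool:                      # each worker tests guesses in parallel
            for hit in pool.map(try_passphrase, wordlist):
                if hit:
                    print("Passphrase found:", hit)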

Having briefly discussed the power of the GPU, some information on the CUDA SDK is useful for understanding the procedure of code-breaking in WPA. CUDA programs are written in C or C++ with specific extensions and are compiled with a dedicated compiler (nvcc) on Windows or Linux (Mariziale, Richard III & Roussev, 2007). A CUDA program executes in two separate components, the host and the GPU: the host component issues instructions on what operations to perform, while the GPU component creates the threads and rapidly completes the instructions. In addition, CUDA provides functions for memory management, controlling the GPU, support for OpenGL and Direct3D, and texture handling. Together, the CUDA program and the GPU provide a cost-effective boost to the processing power of the computer system.

The approach also has limitations, which include the need to maximize the use of shared memory, limit access to global memory, and prevent serialization of the threads running on the GPU. Depending on the application, these limitations are bearable when weighed against the results obtained and the time saved. Given such increases in power, one may wonder why GPUs have not yet replaced regular processors for general-purpose computing. Several reasons lie behind this; for instance, GPU floating-point arithmetic has generally not been IEEE compliant, and until fairly recently the hardware offered no support for integer arithmetic (Mariziale, Richard III & Roussev, 2007). The large performance gains depend on the use of floating-point numbers, which makes implementations that rely on integer arithmetic difficult. Another problem lies in the fact that GPUs are massively parallel by nature, and each branching operation incurs an additional cost in resources: as threads diverge, the GPU begins executing serially, which defeats its intended purpose (Mariziale, Richard III & Roussev, 2007). This suggests that algorithms need to be designed to preserve a highly parallel mode of operation. None of this should be taken to mean that GPUs are inefficient; rather, the GPU is best used for processor-intensive tasks such as code-breaking, leaving the CPU free to handle other work. If the GPU were to operate as the main processor, lower-priority tasks would eventually be locked out as threading increased, until the executing process terminated. A further shortcoming is that the APIs used to program GPUs are still not well suited to general-purpose programming, because they were designed specifically for graphics applications (Mariziale, Richard III & Roussev, 2007). The GPU technology in modern graphics cards shows that the power of these devices can be harnessed to improve overall system performance; their use in breaking the keys of wireless networks bears witness to that and provides future developers with useful insight into the way forward for network security.

Conclusion

In this paper the discussion has revolved around Wi-Fi technology and the issues surrounding the security of such networks. The internet, in practice a global network, has greatly added value to the lives of millions of people all over the world and continues to grow. For example, an individual interested in education today has access to institutions all over the world and can tap into the knowledge he or she desires without traveling. Through social networking sites such as Facebook and Twitter, people all over the world can interact and share ideas and experiences. An individual interested in buying and selling stocks on Wall Street can be just as successful today in a remote village in Sudan as in Manhattan. The internet's contribution to humanity cannot yet be fully gauged, but, as with any innovation, it has raised new issues as well.

The security issues highlighted in this paper are proof of the vulnerability to which users of this great breakthrough are regularly exposed. It is for this reason that fast and conclusive action should be taken to close the loopholes that exist within networks that are so useful and serve so many purposes. Anyone aware of the vulnerabilities of such a system must make an effort to guard against the hazards that may arise from using the network for any purpose. It is also encouraging that hardware manufacturers are constantly improving their devices to raise performance and reduce operating costs. Even though our systems remain vulnerable, such action points to a bright future ahead.

References

Computer Fraud & Security. (2001). AirSnort Tool Cracks WEP in 15 Minutes. Computer Fraud & Security, 2001, 5.

Hunton, P. (2009). A Growing Phenomenon of Crime and the Internet: A Cybercrime Execution and Analysis. Computer Law & Security Review, 25, 528-535.

Kocak, T., & Jagetia, M. (2008). A WEP Post Processing Algorithm for a Robust 802.11 WLAN Implementation. Computer Communications, 31, 3405-3409.

Mariziale, L., Richard III, G. G., & Roussev, V. (2007). Massive Threading: Using GPUs to Increase Performance of Digital Forensic tools. Digital Investigation, 4, 73-81.

Rowan, T. (2010). Negotiating Wi-Fi Security. Network Security, 2010, 8-12.

Williams, P. (2006). Cappuccino, Muffin, Wi-Fi - But What About the Security? Network Security, 2006(10), 13-17.

Encryption as a Key Technological Solution to Corporate Security

Introduction

Technological advancements in the communication industry call for stringent measures to ensure the security of information transmitted through distribution channels. Such information is protected by transforming messages from their original readable text into a more complicated form known as ciphertext, which requires special knowledge to access. This encryption technique ensures confidentiality, as only the transmitter and the recipient have access to the secret key needed to decrypt the message (Brenton, 1999). Encryption has been successfully employed by many governments and militaries to keep their communication secret, and it is now used in many civilian systems to protect both data in transit and stored information. Data stored on computers or other storage devices such as flash disks can be protected against leakage through encryption, and encryption of data in transit is necessary to protect information from interception during communication by telephone, the internet, and other systems (Brenton, 1999).

Applications of encryption

Pretty Good Privacy (PGP)

This is one of the encryption applications developed in the early nineties by Phil Zimmermann to enhance the cryptographic security of transmitted information. PGP is a cryptosystem that combines public-key and conventional cryptography, and it compresses the plaintext before encryption so that both space and transmission time are used effectively (PGP, 2004). Compression also strengthens resistance to cryptanalysis, since it removes the patterns in the plaintext that such techniques exploit to crack a cipher. PGP then creates a one-time session key that encrypts the plaintext into ciphertext using a fast and secure conventional encryption algorithm. This session key is itself encrypted with the recipient's public key and transmitted to the recipient along with the ciphertext (PGP, 2004).

In decryption, the session key is recovered with the recipient's private key using the recipient's copy of PGP. This session key is then used to decrypt the ciphertext, making the message readable again (PGP, 2004).
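
The hybrid flow described above can be illustrated with the following simplified sketch, which uses the third-party Python cryptography package as a stand-in for PGP's internal machinery; the message, key sizes, and algorithm choices are illustrative and this is not the OpenPGP message format itself.

    # Simplified illustration of PGP-style hybrid encryption (not OpenPGP itself):
    # compress the plaintext, encrypt it with a random session key, then encrypt
    # the session key with the recipient's public key. Requires the third-party
    # "cryptography" package.
    import os, zlib
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Recipient's key pair (the public key is published; the private key
    # never leaves the recipient).
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # Sender side
    plaintext = b"Meeting moved to 3 pm."
    compressed = zlib.compress(plaintext)             # compression before encryption
    session_key = AESGCM.generate_key(bit_length=128) # one-time conventional key
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, compressed, None)
    oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped_key = public_key.encrypt(session_key, oaep)   # only the recipient can unwrap

    # Recipient side
    recovered_key = private_key.decrypt(wrapped_key, oaep)
    recovered = zlib.decompress(AESGCM(recovered_key).decrypt(nonce, ciphertext, None))
    assert recovered == plaintext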

Smart credit card

A smart card has a built-in microprocessor used in the verification process. Anyone using the card has to establish his or her identity whenever a transaction is made. The card and the reader exchange a series of encrypted messages to confirm that both parties to the transaction are genuine, and the transaction itself is performed in encrypted form to enhance the security of the information (Brenton, 1999). As a result, the chances of either party defrauding the system are minimized. Such cards are currently used in many businesses in the U.S. as well as in Europe.

Personal Identification Number (PIN)

This is a coded identification number that is entered at an automated teller machine together with the bank card to confirm the legitimacy of the bearer before a transaction is carried out. The PIN is stored in encrypted form on the ATM card or in the bank's computers. Given the PIN and the bank's keys it is possible to compute the cipher, but not the reverse, since the transformation is one-way. This system protects the information against leakage or interception by adversaries (Brenton, 1999).
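
A simplified illustration of such a one-way check is sketched below; the keyed-hash construction, card number, and PIN shown are invented examples, not the actual PIN-block standards that banks use.

    # Illustrative one-way PIN check: the stored value can be recomputed from the
    # PIN and a bank-held key, but the PIN cannot be recovered from it.
    # Simplified stand-in, not the ISO PIN-block scheme used in practice.
    import hashlib, hmac, os

    BANK_KEY = os.urandom(32)          # secret key held by the bank (example)

    def pin_digest(card_number: str, pin: str) -> bytes:
        return hmac.new(BANK_KEY, (card_number + ":" + pin).encode(),
                        hashlib.sha256).digest()

    stored = pin_digest("4000123412341234", "4821")    # kept by the bank / on the card
    attempt = pin_digest("4000123412341234", "4821")   # computed when the card is used
    print(hmac.compare_digest(stored, attempt))        # True only for the right PIN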

Secure Electronic Transaction (SET)

This is a procedure developed by Visa and MasterCard that uses a public-key system to enhance the security of payment transactions in a business. The protocol preserves data integrity as well as confidentiality, and it also verifies the authenticity of both the cardholder and the merchant. Because dual signatures are used, leakage of information is highly unlikely under this protocol (Segev, Porra, & Roldan, 1998).
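
The dual signature links the order information seen by the merchant with the payment information seen by the bank, without revealing one to the other. A minimal sketch of the idea follows; the signing details, data contents, and key sizes are simplified illustrations rather than the full SET formats.

    # Sketch of the SET dual-signature idea: sign H( H(order) || H(payment) ),
    # so the merchant and the bank can each verify the part they see without
    # learning the other part. Requires the third-party "cryptography" package.
    import hashlib
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    order_info = b"order: 2 books, ship to P.O. Box 7"      # seen by the merchant
    payment_info = b"card: 4000...1234, amount: 39.90"      # seen by the bank

    h_order = hashlib.sha256(order_info).digest()
    h_payment = hashlib.sha256(payment_info).digest()
    combined = hashlib.sha256(h_order + h_payment).digest()

    customer_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    dual_signature = customer_key.sign(combined, padding.PKCS1v15(), hashes.SHA256())
    # The merchant receives (order_info, h_payment, dual_signature) and can verify
    # the signature without ever seeing the payment details.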

Implications of encryption on organizations

Various corporate businesses as well as private ventures came to depend heavily on information in the late 20th century as a result of the transition witnessed in the communication industry. This transition brought better access to affordable communications and gave such ventures the capability to obtain, store, and distribute an almost unlimited amount of information. E-banking, personal computers, e-commerce, and internet use are some of the developments of this revolution that influenced every aspect of business activity in that era (Segev, Porra, & Roldan, 1998). Cryptology has been fundamental to the protection of information in all of these settings. It is therefore noteworthy that cryptology extends beyond the provision of secrecy to encompass the protection of information integrity against interference by adversaries. In e-commerce, for instance, the transactions between the customer and the merchant are protected through encryption so as to preserve the confidentiality of the information. Moreover, the merchant is assured of full payment, since the information concerning the transaction is protected and the customer cannot claim otherwise (Segev, Porra, & Roldan, 1998).

As stated before, the science of encryption has been helpful not only in ensuring the secrecy and confidentiality of information but also in preserving the integrity of transactions across corporate networks. Encryption also helps in verifying the authenticity of messages in a communication. According to PGP (2004), conventional encryption is both fast and convenient for the protection of stored data.

However, products built on encryption may not be perfect as far as the protection of integrity, secrecy, and authenticity of messages is concerned, and additional techniques are needed to guarantee authenticity and integrity (Brenton, 1999). To begin with, the encryption of e-mails has to be accompanied by digital signatures at the point of their creation in order to preserve the trustworthiness of the information; without such signatures, the sender can argue that the information was tampered with after it left his or her computer but before it was encrypted. Additionally, sending e-mails from outside the organization's network may not be practical for mobile users of an encryption product. The use of encryption to protect information can also be undermined when a mistake is made in designing or implementing the system; in such circumstances adversaries may reach the unencrypted information without having to break the cipher at all, paving the way for successful attacks. Moreover, poor handling of cipher keys also poses risks to the protection of data, since such errors may allow adversaries to gain access to vital information in the communication (Brenton, 1999).

Trust has to be developed between the sender and the recipient of an encrypted message so as to keep the key secret and protect it from interception by any adversary. If anyone intercepts the messages in a communication, he or she can forge or modify the information, thereby exposing vital transaction details that may be used to sabotage the operations of the organization.

Evolution of old and current encryption practices

Originally, cryptography entailed the concealment of information and its subsequent revelation to legitimate users through the use of a secret key. This involved transforming information from plaintext to ciphertext and back again via encryption and decryption, which ensured the security of the data. During the World Wars, encryption was used mainly to ensure the confidentiality of written messages (Segev, Porra, & Roldan, 1998). The same principles, however, have been found to fit modern technologies well: encryption now encompasses the protection of information stored in computers as well as information flowing between electronic devices (Segev, Porra, & Roldan, 1998).

In addition, signals from fax machines and televisions are encrypted, and encryption supports the verification of participants' identities in e-commerce. When combined with other techniques such as digital signatures, encryption ensures not only the confidentiality of messages but also the integrity and authenticity of information communicated across networks. Generally, the evolution of encryption as a technology for protecting information is attributed to changes in information technology, e-commerce, and internet use. Public-key cryptography provides for the secure exchange of information between individuals who have no prior security arrangement. Private keys, unlike public keys, are never shared, which improves security: anyone holding the public key can encrypt a message but cannot decrypt it (Segev, Porra, & Roldan, 1998).

Conclusion

Encryption has been an important technique for ensuring the confidentiality of information in a communication. It transforms information from its original form, known as plaintext, into ciphertext, which requires a special key to access. The encrypted information therefore cannot be read by anyone other than the transmitter and the recipient, who hold the secret key, and the information is thus protected from interception. Developments in information technology, e-commerce, and the internet have made it necessary to protect both data in transit and information stored on computers. Encryption is therefore vital for organizations, as it enhances the security of information across networks. However, encryption on its own may not be sufficient, and other techniques are required to preserve the integrity and authenticity of messages in a communication (Segev, Porra, & Roldan, 1998).

Reference List

Brenton, C. (1999). Authentication and Encryption. Sybex, Inc. Web.

PGP. (2004). An Introduction to Cryptography. Web.

Segev, A., Porra, J., & Roldan, M. (1998). Communications of the ACM, 41(10), 81-87. Web.

Strong Encryption and Universalization Principle

Introduction

Nowadays, strong cryptography is a frequently used term, interpreted as a means to exchange and protect information in the electronic world. With the increased need to use cryptographic methods to hide certain facts and make them readable only to a particular group of people, a number of ethical questions and global concerns arise. On the one hand, people want to protect their rights and have access to private communication. On the other hand, the intentions of one group of people using cryptography may become a real threat to another group. A kind of ethical dilemma therefore exists, and Kant's Principle of Universalization (the first formulation of the Categorical Imperative) offers one possible way to treat strong encryption as a morally permissible practice that it would be wrong either to prohibit or to make obligatory.

Kant's Principle as a Method of Moral Evaluation

Deontological approaches help to explain the nature of moral obligations, which are unconditional, and the reasons why people have to obey them even when the outcomes contradict personal interests (Kant's Deontological Ethics 39). The Categorical Imperative developed by Kant has several formulations; the first, the Principle of Universalization, asks whether the maxim of an action (i.e., the personal rule on which one acts) could be willed as a universal law.

To determine whether an action is universalizable, a person should remain impartial to everything particular about the case and form judgments using only the bare facts and the concept of a universal law. The Principle of Universalization holds that an action cannot be morally approved if willing its maxim as a universal law leads to a contradiction. People therefore have to be sure that their actions, their intentions, and the outcomes of their activities do not contradict the laws everyone would have to follow.

Strong Encryption according to Kant's Principle

Strong encryption is an ethically contested concept. From an ethical point of view, it is in itself a good activity whose presence does not lead to immoral outcomes. Still, if encryption is used by a criminal or by a socially unstable person whose decisions and actions may hurt other people, it may become a tool for actions that are morally wrong.

Therefore, it is possible to say that, according to Kant's Principle of Universalization, strong encryption can be permissible, with its moral quality depending on the nature of the person who uses it. It is wrong to impose an obligation on people to use encryption, and it is equally wrong to prohibit its use. At the same time, there is no prohibition against attempting to analyze encrypted material; such attempts, when made by authorized governmental representatives, can be approved under the universal law as a way of promoting a safe society.

Conclusion

In general, it is hard to evaluate the concept of strong encryption in terms of ethical regulations and expectations. Each researcher and philosopher can develop a different approach to the opportunity to exchange encrypted information. Kant's position, as expressed in the Principle of Universalization, helps to consider strong encryption a morally permissible practice whose quality depends on the nature and psychological condition of the person who uses it. People should have the right to make independent decisions; prohibiting or making obligatory the use of strong encryption contradicts the idea of the universal law. Permission, as a neutral position, can therefore be offered as a possible solution.

References

Kant's Deontological Ethics. Three Ethical Theories, n.d., 39-47.

Encryption, Information Security, and Assurance

Digital signature and its use

A digital signature is “a complex technique or procedure used to certify the integrity of specific digital content, software, or message” (Mele, Pels, & Polese, 2015, p. 128). The generation of a digital signature follows a well-defined mathematical process. A process known as hashing takes place before a specific message is sent. The hashed message is “encrypted using the sender’s private key” (Waraich, 2008, p. 11), and this encrypted digest is delivered to the receiver along with the message via the internet. The recipient decrypts the hashed value using the sender’s public key and recomputes the hash of the received message. The two hashes are “then compared to establish if the message has been tampered with” (Waraich, 2008, p. 11). This analysis shows that the generation of a digital signature follows the rules of encryption and decryption; it is therefore comparable to encryption, whereby messages are coded and channeled to the receiver.
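
A minimal sketch of this hash-then-sign flow is given below. It uses the sign and verify calls of the third-party Python cryptography package, which handle the hashing and the private-key operation described above; the message and parameter choices are illustrative.

    # Minimal sign/verify sketch matching the hash-then-sign flow described above.
    # Requires the third-party "cryptography" package.
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.exceptions import InvalidSignature

    sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    message = b"Quarterly report attached."

    # Sender: the message is hashed and the hash is signed with the private key.
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)
    signature = sender_key.sign(message, pss, hashes.SHA256())

    # Receiver: recompute the hash and check it against the signature using the
    # sender's public key; any tampering makes verification fail.
    try:
        sender_key.public_key().verify(signature, message, pss, hashes.SHA256())
        print("Signature valid: message has not been tampered with.")
    except InvalidSignature:
        print("Signature invalid: message or signature was altered.")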

“For all practical purposes,” a particular encryption process is strong enough and safe

The presented assumption argues that encryption is a strong and effective process. This means that the encryption process is secure and can ensure documents and messages are delivered to the targeted recipients without being compromised. The process encodes messages and makes them inaccessible to unauthorized parties (Mele et al., 2015). However, human errors and inappropriate implementation of encryption can make the data accessible to unintended users (Waraich, 2008). Modern technological advances continue to challenge the assumption, because cyber-criminals can hack different systems and access unauthorized information.

New programs and systems have also emerged, further threatening the security of encrypted information (Kim & Solomon, 2013). A number of practical errors continue to weaken this assumption. For example, many departments and companies do not use advanced technologies to verify digital signatures and message authentication codes (MACs). Some users also fail to adopt additional authentication mechanisms that would make their systems inaccessible to intruders. Additional security measures are also “avoided, thus weakening the manner in which encrypted information interacts with cloud systems” (AbuTaha, Farajallah, Tahboub, & Odeh, 2011, p. 299). The inability to apply proper Key Management (KM) has also weakened the assumption.
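
For readers unfamiliar with MACs, the following minimal sketch (with an invented key and message) shows what verifying a message authentication code involves: the receiver recomputes the tag with the shared key and compares it in constant time.

    # Small illustration of a message authentication code (MAC).
    # The key and message are invented examples.
    import hmac, hashlib

    shared_key = b"example-shared-secret"
    message = b"transfer 100 to account 42"

    tag = hmac.new(shared_key, message, hashlib.sha256).digest()        # sender
    ok = hmac.compare_digest(tag,
                             hmac.new(shared_key, message, hashlib.sha256).digest())
    print(ok)   # False if the message or tag was modified in transit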

The difference between public key infrastructure and private key infrastructure

A public key infrastructure (PKI) is “a set of policies, procedures, hardware, software, and people aimed at managing public-key encryption and create, revoke, or store digital certificates and manage public-key encryption” (Waraich, 2008, p. 2). On the other hand, a “private key infrastructure is a set of processes, hardware, and software for managing private encryption between a small number of authenticated users” (AbuTaha et al., 2011, p. 301). These approaches target different users, thus making public key infrastructures complex.

It should also be noted that such infrastructures are different from private and public keys. These two keys play a vital role in encryption. Private keys refer to “a decryption or encryption key known to specific parties that exchange secret information or messages” (Waraich, 2008, p. 4). A “public key can be defined as a value of encryption that is used to encrypt various digital signatures and messages” (AbuTaha et al., 2011, p. 308). These two keys are combined in order to improve the level of data security.

Information security and assurance, encryption systems, products, tools, and concepts

Information security (IS) is “a complex process that focuses on the best approaches to ensure transmitted information and data is secure” (Mele et al., 2015, p. 131). Information assurance focuses on both the digital and physical aspects of data. That being the case, encryption focuses on the digital aspect of the transmitted information. Waraich (2008) argues that “encryption products, tools, concepts, and systems, therefore, focus on confidentiality and integrity of the transmitted messages” (p. 19). Encryption safeguards the targeted information from unauthorized persons or processes. As well, integrity ensures “that the data is complete and accurate throughout its lifecycle” (Waraich, 2008, p. 29). Encryption, therefore, secures the integrity of the information or messages delivered to a specific recipient (Kim & Solomon, 2013).

That being the case, encryption systems cannot guarantee the effectiveness or physical security of hardware systems. Encryption is “not a guarantee that a specific information system will serve its purpose” (Mele et al., 2015, p. 131), and it cannot safeguard data from physical attacks or disasters. This argument shows that encryption is only one of the vital processes used to support information assurance and security.

Managers’ and users’ understanding of mathematics and the use of cryptographic systems

It is agreeable that encryption is based on number theory. This means that “many people and programmers will have to become more numerate and sophisticated in order to safeguard every data” (Waraich, 2008, p. 19). This knowledge points to the mathematical approaches that can make it easier for organizations to use the most appropriate cryptographic systems, and such systems ensure that organizations have sound information assurance and security processes. However, managers do not themselves need to understand these numerical principles in order to safeguard the integrity of their organizations’ information.
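
To make the role of number theory concrete, the following toy sketch walks through RSA with deliberately tiny primes; real systems use primes hundreds of digits long, and the numbers here are purely illustrative.

    # Toy RSA with deliberately tiny primes, to show the number theory involved.
    # Real deployments use primes of 1024+ bits; never use numbers this small.
    # Requires Python 3.8+ for pow(e, -1, phi).
    p, q = 61, 53
    n = p * q                      # modulus: 3233
    phi = (p - 1) * (q - 1)        # 3120
    e = 17                         # public exponent, coprime to phi
    d = pow(e, -1, phi)            # private exponent: modular inverse of e (2753)

    m = 65                         # "message" encoded as a number smaller than n
    c = pow(m, e, n)               # encryption: c = m^e mod n
    assert pow(c, d, n) == m       # decryption recovers m
    print(c, pow(c, d, n))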

Chief Information Officers (CIOs) and programmers, by contrast, should be aware of the mathematics behind encryption. This knowledge makes it easier for them to develop powerful encryption systems that safeguard the integrity of transmitted data and information (Mele et al., 2015). The role of managers is to ensure that such systems are implemented properly; this division of labor ultimately safeguards every organization’s data and information.

Reference List

AbuTaha, M., Farajallah, M., Tahboub, R., & Odeh, M. (2011). Survey Paper: Cryptography is the Science of Information Security. International Journal of Computer Science and Security, 5(3), 298-309.

Kim, D., & Solomon, M. (2013). Information Security and Assurance Textbook: Fundamentals of Information Systems Security. Burlington, MA: Jones & Bartlett Learning.

Mele, C., Pels, J., & Polese, F. (2015). A Brief Review of Systems Theories and Their Managerial Applications. Service Science, 2(1), 126-135.

Waraich, R. (2008). 2-PKI: A Public and Private Key Infrastructure. ETH, 1(1), 1-39.