Attribute-based encryption and other encryption techniques
Attribute-based encryption (ABE) is a technique that secures data in the cloud environment. Its core components include the user's secret key, the ciphertext, and user credentials expressed as attributes. ABE is derived from traditional public-key encryption.
Unlike traditional encryption techniques, attribute-based encryption has a collusion-resistance property. As a result, an adversary must possess the user's key to access secured files. Attribute-based encryption enables login security to mitigate collusion attacks (Xavier & Chandrasekar, 2013).
ABE attaches structured attributes to encrypted messages and shared files. Thus, only an authorized user whose attributes match can access or decrypt the encrypted files.
The ABE approach facilitated the creation of various hybrids, including ciphertext-policy ABE, key-policy ABE, attribute-based broadcast encryption, multi-authority attribute-based encryption, and distributed attribute-based encryption. Data analysts classify these encryption techniques based on their importance to data security.
Consequently, the limitations of each encryption technique reduce its acceptance. Public-key encryption is a primitive encryption technique used in cloud computing, but it lacks scalable options; as a result, user attributes are inefficient and difficult to manage.
Attribute-based encryption identifies and encrypts the user key with attribute sets. Consequently, the client can manage, monitor, and share a personal health record (PHR) using identity sets. However, user revocation is not supported in basic ABE (Xavier & Chandrasekar, 2013).
Public-key encryption rests on a key pair: a public key and a private key. A message encrypted with one key of the pair can be decrypted only with the corresponding other key. Every user must therefore hold both a public key and a private key.
The public key encrypts confidential data on a cloud server, while the private key decrypts the encoded message; the ciphertext is what travels between sender and receiver. Thus, the key pair is the most significant feature of the public-key technique.
As a result, the user can secure and authenticate data integrity using the private key. However, public-key algorithms are computationally expensive, and the user must perform several costly operations to relay and receive encrypted messages.
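The public-encrypt / private-decrypt relationship described above can be sketched with textbook RSA. The primes and message below are tiny, arbitrary choices for illustration only; real deployments use keys of thousands of bits and padding schemes.

```python
# Textbook RSA with tiny primes -- an insecure toy, purely to illustrate
# how the public key encrypts and only the private key decrypts.

p, q = 61, 53
n = p * q                     # modulus, shared by both keys
phi = (p - 1) * (q - 1)       # Euler's totient of n
e = 17                        # public exponent, coprime with phi
d = pow(e, -1, phi)           # private exponent: e*d = 1 (mod phi)

message = 42
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # decrypt with the private key (d, n)

print(recovered)  # 42
```

Note that `pow(e, -1, phi)` (modular inverse) requires Python 3.8 or later.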
The digital signature is another component of public-key encryption (Xavier & Chandrasekar, 2013). In a digital signature scheme, an authentication mechanism accompanies the encrypted message. Digital signatures come in two forms: direct and arbitrated.
Applications of public-key cryptosystems include the RSA algorithm, elliptic-curve cryptography, and Diffie-Hellman key exchange. The limitations of the public-key technique include computational cost, collusion attacks, and vulnerability to brute-force attacks.
Identity-based encryption (IBE) is another primitive technique used to secure and share files in the cloud environment. IBE derives a user's public key from an identity string such as an e-mail or IP address, or another text value. Its protocol framework comprises four algorithms: setup, extract, encrypt, and decrypt.
The drawbacks of IBE include data compromise, unauthorized access, system incompatibility, and code attacks. The sender must obtain the recipient's identity string to relay secure messages. The IBE technique creates multiple task-management schemes; as a result, this security paradox can expose encrypted files.
Unlike the ABE technique, IBE does not support on-demand revocation. Ciphertext-policy ABE (CP-ABE) encrypts data under an access policy; an authorized user must hold a secret key whose attributes satisfy that policy. The attribute-bearing secret key is the main feature of the ciphertext policy.
Encrypted data can be relayed by third-party servers without compromise. Authorized users must hold keys matching the policy to access encrypted folders in the cloud environment. However, user revocation is impossible during collusion attacks and data compromise.
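The CP-ABE access model described above can be illustrated with a toy policy check. This sketch models only the policy logic, not the cryptography: in real CP-ABE the policy is enforced mathematically by the ciphertext itself, and the attribute names below are hypothetical.

```python
# Toy illustration of the CP-ABE access model (no real cryptography):
# a ciphertext carries an access policy, and decryption succeeds only
# when the holder's attribute set satisfies that policy.

def satisfies(policy, attributes):
    """Evaluate a small AND/OR policy tree against a set of attributes."""
    op = policy[0]
    if op == "ATTR":
        return policy[1] in attributes
    if op == "AND":
        return all(satisfies(p, attributes) for p in policy[1:])
    if op == "OR":
        return any(satisfies(p, attributes) for p in policy[1:])
    raise ValueError("unknown operator: " + op)

# Hypothetical policy: (doctor AND cardiology) OR admin
policy = ("OR",
          ("AND", ("ATTR", "doctor"), ("ATTR", "cardiology")),
          ("ATTR", "admin"))

print(satisfies(policy, {"doctor", "cardiology"}))  # True
print(satisfies(policy, {"doctor", "radiology"}))   # False
```

A user key in CP-ABE embeds such an attribute set, so a third-party server can store and relay the ciphertext without ever being able to pass this check itself.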
Multi-authority ABE distributes key-issuing trust across several attribute authorities, creating multiple levels of user access to secured data. As a result, each user operates within a restricted domain.
Multi-authority encryption techniques can be used by health organizations, insurance institutions, banks, and financial houses. The server operator grants access levels based on the user's authority (Xavier & Chandrasekar, 2013).
Benefits of attribute-based encryption
Variations of ABE have been used by various researchers to evaluate the significance of cloud computing. The challenges of data security motivate the application of these different variations, which have been used in public domains to reduce cost.
ABE has accordingly been used to test different data-security services in the cloud environment. Surveys reveal that ABE supports scalable and secure sharing in cloud computing. Li, Yu, Zheng, and Ren (2013) discussed the advantages of attribute-based encryption for the secure exchange of scalable records.
Their findings revealed that patient privacy and confidentiality can be secured at low cost in the cloud environment. Bethencourt, Sahai, and Waters (2012) used the ciphertext policy to test the ABE variations; their findings revealed that CP-ABE eliminated collusion attacks on cloud servers.
Lekshmi and Revathi (2014) tested the CP-ABE technique using a multi-authority approach, which enables different levels of user access to secured files. The attribute-based encryption method and its variations improve data security in the cloud environment.
Lekshmi, V., & Revathi, P. (2014). Implementing secure data access control for multi-authority cloud storage system using cipher-text policy-attribute based encryption. Information Communication and Embedded Systems, 2(1), 1-6.
Li, M., Yu, S., Zheng, Y., & Ren, K. (2013). Scalable and secure sharing of personal health records in cloud computing using attribute-based encryption. Parallel and Distributed Systems, 24(1), 131-143.
Xavier, N., & Chandrasekar, V. (2013). Security of PHR in cloud computing by using several attribute based encryption techniques. International Journal of Communication and Computer Technologies, 1(7), 2278-9723.
Privacy and internet security are subjects of great interest in today's globalized world. The information we send or receive needs to be transmitted as securely as possible. Encryption encodes data into a form called ciphertext that cannot be easily understood by unauthorized people. To decode it, a decryption system is required that is known only to the authorized receiver. Decryption converts encrypted data back into its original form, readable by the end user. Encryption and decryption are gaining great significance in the world of wireless communications.
This is mainly because the wireless communication medium can easily be tapped and misused. It is therefore vital to use encryption and decryption systems, especially to protect privacy. In general, the stronger the ciphertext, the better the security (Bauchle et al., 2009). This paper discusses the Pretty Good Privacy (PGP) encryption system in general and why it is a good choice for both individual and organizational use.
Pretty Good Privacy (PGP)
Pretty Good Privacy, generally abbreviated PGP, is a computer program that helps secure online transactions. Specifically, PGP provides cryptographic privacy and authentication, and it is often used for signing, encrypting, and decrypting e-mail. It has become one of the most reliable, easy-to-use, and effective privacy systems. PGP works through public-key cryptography, binding public keys to a user ID or e-mail address.
The popularity of PGP is increasing among individuals, organizations, and businesses for several reasons, such as its confidentiality, authentication, integrity, and double encryption. To ensure that only the intended receiver reads a message, the message is encrypted with the receiver's public key and can be decoded only by the receiver using his private key. Hence, confidentiality is high when using PGP.
PGP a Good idea for Individuals
There are several reasons PGP is highly recommended for individual use. It is a foremost provider of privacy solutions that avert the risk of unauthorized access to digital property by defending it at the source. Additionally, PGP gives individuals full protection tools against privacy breaches. With PGP's technology, individuals make their own assessment of what information should be released about them, and PGP gives them the right to determine what is released. In fact, the latest versions of PGP's cookie.cutter help individuals surf the Web securely, without worry that information about them can be tracked by unauthorized users (ftc.gov, N.D.).
PGP has also helped to form a web of trust. For instance, if one user knows another user's certificate is valid, he can sign that certificate; the group of individuals who trust the first user will then automatically trust the second user's certificate. One of the greatest advantages of PGP is that it allows an unlimited number of users to sign each certificate. In other words, a network or web of trust develops as more and more users vouch for each other's certificates (networkcomputing.com, 2009).
PGP a Good idea for Organizations
Several organizations use the PGP system for the safe transfer and storage of information. In addition to protecting data in transit over a network, the PGP encryption system is also effective at protecting data stored for long periods.
This is of great significance to organizations because it allows private data to be stored securely. The latest version of PGP is more beneficial still, as it has added further encryption algorithms. The degree of cryptographic vulnerability differs with the algorithm used, and in most cases the algorithms adopted in recent years are not publicly known to have cryptanalytic weaknesses (Wikipedia, 2009). Hence, it is always recommended to use a good encryption system such as PGP to ensure organizational privacy.
PGP also provides authentication. The sender signs a message with his own private key; on the other end, the receiver verifies the signature using the sender's public key. Since only the sender's public key can verify the signature, only the sender, who holds the matching private key, could have produced it. These digital signatures further authenticate the message and protect its integrity.
The security of the PGP encryption system is comparatively high because of its use of double encryption. Through a combination of symmetric and asymmetric encryption, PGP ensures both high security and high speed. This is of great significance to organizations because PGP allows a sender to send a single encrypted message to many recipients without re-encrypting the entire message: only the small session key is encrypted separately for each recipient. With a purely asymmetric system, the whole message would have to be re-encrypted for each recipient individually, a time-consuming and tedious job.
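The hybrid ("double") encryption idea can be sketched as follows. The primitives here are deliberately toy-sized stand-ins (tiny RSA keys, a SHA-256-derived XOR keystream instead of a real symmetric cipher), and all names are illustrative; real PGP uses full-strength algorithms. The point is the structure: the body is encrypted once, and only the session key is wrapped per recipient.

```python
# Sketch of PGP-style hybrid encryption: encrypt the message once with a
# random session key, then encrypt only that session key per recipient.
# Toy primitives only -- insecure, for illustration.
import hashlib, secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: XOR data against a SHA-256-derived keystream.
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def toy_rsa_keys(p, q, e=17):
    # Textbook RSA key pair from tiny primes (insecure).
    n, phi = p * q, (p - 1) * (q - 1)
    return (e, n), (pow(e, -1, phi), n)

recipients = [toy_rsa_keys(61, 53), toy_rsa_keys(89, 97)]

message = b"quarterly report"
session_key = secrets.token_bytes(16)
body = keystream_xor(session_key, message)          # encrypted once

# Wrap the session key for each recipient's public key (byte by byte).
wrapped = [[pow(b, pub[0], pub[1]) for b in session_key]
           for pub, _ in recipients]

# Recipient 0 unwraps the session key with a private key and decrypts.
priv = recipients[0][1]
key0 = bytes(pow(c, priv[0], priv[1]) for c in wrapped[0])
print(keystream_xor(key0, body))  # b'quarterly report'
```

Adding a third recipient costs only one more wrapped 16-byte key, not a re-encryption of the whole body, which is why hybrid schemes scale so well for multi-recipient mail.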
Conclusion
PGP encryption is comparatively cheap for both individual and organizational use (Alchaar et al., N.D.); the commercial licence requires only a small payment. Since it is easy to use for local file encryption, secure disk volumes, and network connections, organizations find PGP especially useful. PGP is a good option for both individual and organizational use, as its double encryption is extremely difficult to crack. It is essential to understand the need for strong encryption, as it is the chief means of strengthening security and preventing crime: strong encryption prevents easy interception and thereby the misuse of personal details.
References
Alchaar, H., Jones, J., Kohli, V., & Wilkinson, K. (N.D.). Encryption and PGP. Web.
Bauchle, R., Hazen, F., Lund, J., Oakley, G., & Rundatz, F. (2009). What is encryption? Web.
In the field of computing, the past 20 years or so have been marked by an increase in the capability of devices and a massive increase in the use of networked computers to carry out business and related tasks. The processing power of desktop computers has increased almost 100-fold; an average desktop today operates in a range that was only dreamed about in the '80s. Alongside this development, people began to use computers for much more than the word processing, spreadsheets, and simple database tasks that characterized the MS-DOS era. Computer networks link banks, schools, hospitals, government agencies, and people, making work much easier to accomplish. As populations become more reliant on these devices and networks, crime has also begun to emerge around their vulnerabilities. This paper introduces a concept in modern computer networks, namely Wi-Fi, briefly highlights how it is implemented, and focuses on the growing insecurity caused by high-powered Graphics Processing Units.
The Introduction of Wireless Technology
Networks and networking are commonly used terms in the field of computing. The term refers to a connection of various computers and devices through communication channels. Networks are important because they increase efficiency by allowing users to share resources. For example, in an office it is common to see a single printer serving many computers or workstations. This is made possible by a network, wired or wireless, that provides printing services to the various computers at the same time. Without the network, each computer would need its own printer, increasing operating costs.
The advent of the internet saw a vast increase in network use. The internet is a global network of computer networks that brings together governments, learning institutions, commercial bodies, and other agencies, making a large pool of resources easily accessible to millions of people all over the world. As more and more people came to rely on the internet for daily needs, the computer industry came under pressure to improve the quality of networks. This gradual process of improvement led to the type of network this paper focuses on: wireless networks, or Wi-Fi.
As stated earlier, a computer network provides a communication backbone through which computers and peripheral devices can be shared. As the name suggests, a wireless network offers connections from one point to the next without cables, so it is much easier to set up, and the lack of wires reduces maintenance costs. These networks transmit information remotely through electromagnetic waves such as radio waves. In recent years the telecommunication industry has also grown, and a popular new type of wireless network exists in the domain of cellular networks, which can transmit voice and data over improved channels. Wireless networks have become very popular across the developed world, and it is not uncommon to find "hotspots" in coffee bars, airports, colleges, and train and bus stations. They offer people great flexibility but may put unsuspecting users in harm's way. It is primarily for this reason that entrepreneurs interested in using this technology for their business need to be aware of the security risks such networks imply. For example, within an unobstructed space a wireless signal can travel as far as 500 meters, including up heating or elevator shafts (Williams, 2006), so it is difficult to ensure that signals will not travel beyond the business space they are meant to cover. Initially, these networks relied on the Wired Equivalent Privacy (WEP) standard to deter interception of transmitted data. WEP in its basic form used 40-bit static keys and RC4 encryption to provide security equivalent to that of a wired network, but the fact that wireless traffic can be captured without any physical connection made this approach insufficient.
An improved approach was then developed, namely Wi-Fi Protected Access (WPA), which utilizes an 8-byte message integrity check (MIC) to ensure that transmitted data has not been tampered with (Williams, 2006).
In this paper, we discuss an emerging technique that compromises wireless networks through the use of Graphics Processing Units (GPUs). These new graphics adapters contain several general-purpose processors, as opposed to the special-purpose hardware units that characterized their predecessors (Mariziale, Richard III & Roussev, 2007). It is in light of such threats that this paper seeks to demonstrate the risks underlying the use of wireless networks for commercial purposes.
Wireless Weaknesses in WEP
The Wired Equivalent Privacy (WEP) standard is utilized in the IEEE 802.11 protocol and is known to possess serious security flaws that make networks vulnerable to malicious attacks and intrusion. This is cause for concern given that wireless devices are proliferating rapidly and are expected to soon surpass the volume of traditional wired clients. The main driver behind this proliferation is the need for businesses to cut costs and improve service delivery. Currently, wireless networks bring together devices ranging from embedded microdevices to larger general-purpose PCs. The price of networking has fallen and the available speeds have increased; people increasingly depend on these networks for work and routine tasks such as bill payments and reservations (Kocak & Jagetia, 2008).
However, the security and privacy of Wi-Fi networks remain questionable: almost any unauthorized user with the know-how can access, modify, or use data transmitted over a Wi-Fi network. It is therefore no surprise that as these networks grow and people store and share more important information, hackers have begun to prey on unsuspecting users. Such incidents have driven increased research into wireless network security in recent times. It is important to note that WEP is harder to implement on microdevices with low processing power and memory capacity (Kocak & Jagetia, 2008).
As mentioned earlier, WEP operates in compliance with the IEEE 802.11 standard for wireless networks. This standard defines the basic over-the-air interface used between a wireless client and a base station, or between two or more wireless clients. It became operational to unify protocols and promote interoperability between devices from different manufacturers, and its high data rate and simple encryption technique made it very popular. One of its major shortcomings is that it mainly addresses the physical layer, which concerns easing transmission between devices; the security of data and access controls are poorly handled, leaving a major loophole for would-be attackers. The WEP protocol has been found to have serious flaws owing to the easily broken cryptographic techniques used during data transmission (Kocak & Jagetia, 2008).
Since WEP is intended to provide the same security as a wired network, it uses a shared-key authentication technique to identify stations and clients. In a wired network the key is never transmitted in the open, but in a wireless network there is no physical "entry point" and the key is virtually in the open. To perform shared-key authentication, the network conveys both the challenge and the encrypted challenge over the airwaves. With both in hand, an attacker can attempt to recover the pseudo-random keystream produced by the key/IV pair. Because WEP uses the same key to encode and decode a message, once the keystream for a given key/IV pair has been computed, the message is no longer secure from prying eyes. This is best illustrated by software that passively monitors traffic and attempts to recover the encryption key once enough data packets have been gathered. Some available products break the RC4-based scheme in as little as 15 minutes, depending on the volume of data on the network; roughly 1 GB of captured data suffices, so on higher-volume networks the task is accomplished faster (Computer Security & Fraud, 2001).
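Why a repeated key/IV pair is fatal can be shown concretely. Below is a minimal RC4 implementation (the stream cipher WEP uses), with an illustrative IV and key: encrypting two messages under the same key/IV pair produces the same keystream, so XORing the two ciphertexts cancels the keystream entirely and leaves the XOR of the plaintexts.

```python
# Minimal RC4, shown only to illustrate WEP keystream reuse.

def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA), XORed with the data
    i = j = 0
    out = bytearray()
    for b in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(b ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

iv, root_key = b"\x01\x02\x03", b"secret40"   # WEP concatenates IV + key
p1, p2 = b"attack at dawn!", b"retreat at once"
c1 = rc4(iv + root_key, p1)
c2 = rc4(iv + root_key, p2)

# Same key/IV pair => same keystream => c1 XOR c2 == p1 XOR p2.
xor_c = bytes(a ^ b for a, b in zip(c1, c2))
xor_p = bytes(a ^ b for a, b in zip(p1, p2))
print(xor_c == xor_p)  # True
```

An eavesdropper who captures two frames with the same IV thus learns the XOR of their plaintexts without ever touching the key, and known plaintext in one frame immediately reveals the other.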
Attacks against WEP: Types Used (Theoretical and Technical Description)
From the details in the section above, it is clear that WEP can easily be compromised, and more stringent security is required to protect a wireless network. Attacks on a WEP network can be classified as direct or passive. In a direct attack, the attacker modifies the contents of data being transmitted over the network. This is possible because every data packet carries a short 24-bit initialization vector (IV); with a value this small, repetition is bound to occur within fairly short intervals, creating an opportunity to "grab" a keystream and use it to intercept data. In a passive attack, the attacker violates the integrity of the network by "sniffing": analyzing traffic to identify repeated IVs and then redirecting the information to the attacker. Another passive approach uses precomputed tables to decrypt all data transmitted on a network. Both modes of attack depend on the amount of traffic: the heavier the traffic, the quicker they succeed. WEP security is very vulnerable and will most likely fail against an attacker who is well informed about its weaknesses, as proven by the numerous tools developed to crack such networks (Kocak & Jagetia, 2008).
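How quickly a 24-bit IV repeats follows from the birthday bound, which a few lines of arithmetic make concrete:

```python
# Back-of-the-envelope birthday bound for WEP's 24-bit IV space: with
# randomly chosen IVs, a repeat becomes more likely than not after about
# 1.1774 * sqrt(N) samples, where N = 2^24 possible IVs.
import math

iv_space = 2 ** 24
frames_for_50pct = 1.1774 * math.sqrt(iv_space)  # birthday approximation
print(round(frames_for_50pct))  # ~4823 frames
```

A busy access point sends thousands of frames per second, so a repeated IV (and with it a reused keystream) is a matter of seconds to minutes, not days; drivers that increment the IV sequentially simply make the repetition schedule predictable instead.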
The Migration to WPA and WPA2 Encryption
The failures of WEP have not gone unnoticed, and the result has been two additional security alternatives: WPA and WPA2. Wi-Fi Protected Access (WPA) was developed as a short-term solution to the problems of WEP and was designed specifically for compatibility with hardware capable of supporting WEP. Unlike WEP, which was developed under the IEEE 802.11 standard, WPA did not initially fall under any ratified IEEE standard. WPA provides an improved key-management scheme known as the Temporal Key Integrity Protocol (TKIP). TKIP was a great improvement over WEP, although implementing it required some upgrading of access points; this ceased to be an issue after 2003, when most client and access-point hardware incorporated the technology. The encryption algorithm is similar to WEP's, but the initialization value has been lengthened to 48 bits (Rowan, 2010), making key collisions far less likely. In addition, the protocol has a second data layer that protects against packet replay, defeating the packet-injection and key-collision tricks commonly practiced by hackers against WEP. If the algorithm detects packets with a similar key within sixty seconds of each other, it shuts the network down for sixty seconds. In practice WPA operates either in Pre-Shared Key (PSK) mode or with the Extensible Authentication Protocol. In PSK mode both communicating sides must know the key, which can be sixty-four hexadecimal digits or a passphrase of eight to sixty-three characters. If a weak pre-shared key is chosen, WPA is prone to brute-force attacks that use lookup tables and increased processing power to speed up the cracking.
The Extensible Authentication Protocol improves client identification but is out of reach for most users, who do not want to spend significant sums on the required equipment (Rowan, 2010). These flaws prompted further improvements and brought about WPA2, which fully complies with the IEEE 802.11i standard. Under WPA2 the replacement for TKIP appears to be fully secure, but many manufacturers have yet to incorporate the required software upgrades (Rowan, 2010). It may be argued that WPA2 should be enforced even at the cost of device compatibility, because it offers the best security.
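The brute-force exposure of a weak pre-shared key can be sketched directly. In WPA-PSK the pairwise master key (PMK) is derived from the passphrase and SSID with PBKDF2-HMAC-SHA1 (4096 iterations, 32-byte output); a dictionary attack simply repeats that derivation per candidate, which is exactly the per-guess work that GPUs parallelize. The SSID, passphrase, and wordlist below are hypothetical.

```python
# Dictionary attack against a weak WPA-PSK passphrase (illustrative).
# PMK = PBKDF2-HMAC-SHA1(passphrase, ssid, 4096 iterations, 32 bytes).
import hashlib

def wpa_pmk(passphrase: str, ssid: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(),
                               ssid.encode(), 4096, dklen=32)

ssid = "CoffeeShopWiFi"                  # hypothetical network name
target_pmk = wpa_pmk("letmein12", ssid)  # stands in for a captured handshake

# Derive a PMK per candidate and compare against the target.
wordlist = ["password", "12345678", "letmein12", "qwertyuiop"]
found = next((w for w in wordlist if wpa_pmk(w, ssid) == target_pmk), None)
print(found)  # letmein12
```

Because the SSID salts the derivation, precomputed tables work only per network name, but the 4096-iteration cost per guess is modest enough that a GPU testing many thousands of candidates per second makes short work of dictionary passphrases.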
Attacks against WPA using brute force with VGA GPU Power
As with all new developments, vulnerabilities are discovered over time, and a once-secure environment becomes insecure. In the case of WPA, once considered the answer to Wi-Fi security, the vulnerable point is the encryption, which can be broken through the use of powerful Graphics Processing Units (GPUs). Before this era GPUs processed only graphics content, but the large increase in their capability led manufacturers to consider using that power for non-graphics applications (Mariziale, Richard III & Roussev, 2007). Take the NVIDIA 8800 GTX, which could theoretically perform 350 GFLOPS and cost a buyer $570 in 2007, against an Intel 3.0 GHz dual-core processor that handled only 40 GFLOPS yet cost $266. This translates to roughly $1.6/GFLOP for the 8800 GTX and roughly $6.7/GFLOP for the dual-core processor, making the GPU much cheaper relative to performance (Mariziale, Richard III & Roussev, 2007). Another advantage of the GPU is its large memory bandwidth, which far exceeds that of a regular processor: 86 GB/s versus 6 GB/s. This alone is reason enough to want to maximize the potential of the GPU.
To harness the power of such a GPU, software has to be developed using one of the few APIs capable of interacting with the hardware. For graphics programs it may be worth utilizing OpenGL or Direct3D (Mariziale, Richard III & Roussev, 2007); for tasks such as breaking WPA, however, the software uses general-purpose languages such as C for Graphics (Cg), a high-level language based on C with features that suit GPU programming. In the experiment for this case, the CUDA (Compute Unified Device Architecture) SDK was used to program the 8800 GTX. The 8800 GTX operates on the principle of Single Instruction, Multiple Data, made possible by the stream processors built into the hardware. Once an instruction is issued in the kernel, each multiprocessor runs a set of threads on its stream processors. The result is n processors available to complete a task, where n = the number of multiprocessors × the number of stream processors per multiprocessor. The 8800 GTX has 16 multiprocessors, each with 8 stream processors, for a total of 128 processors (Mariziale, Richard III & Roussev, 2007). It is this huge increase in processing capability that is meant when brute force is used to break WPA keys.
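The data-parallel structure that makes key search suit a GPU can be sketched on a CPU: the keyspace is split into independent chunks, each searched by a separate worker, just as a GPU assigns ranges of candidates to its many stream processors. This is an illustrative CPU analogue (a hash of a 4-digit "key"), not CUDA code, and all names are hypothetical.

```python
# CPU-side analogue of data-parallel brute force: split a keyspace into
# chunks and search them concurrently, as a GPU does across its lanes.
import hashlib
from concurrent.futures import ThreadPoolExecutor

target = hashlib.sha256(b"4217").hexdigest()  # unknown 4-digit "key"

def search_chunk(bounds):
    lo, hi = bounds
    for k in range(lo, hi):
        candidate = str(k).zfill(4).encode()
        if hashlib.sha256(candidate).hexdigest() == target:
            return candidate.decode()
    return None

# Four workers, each owning a quarter of the 0000-9999 keyspace.
chunks = [(i, i + 2500) for i in range(0, 10000, 2500)]
with ThreadPoolExecutor(max_workers=4) as pool:
    hit = next(r for r in pool.map(search_chunk, chunks) if r)
print(hit)  # 4217
```

Because each candidate is checked independently, the search scales almost linearly with lane count, which is why the jump from a handful of CPU cores to 128 stream processors matters so much for cracking.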
Having briefly discussed the power of the GPU, some information on the CUDA SDK helps in understanding the procedure of code-breaking in WPA. CUDA programs are written in C or C++ with specific extensions and are compiled with a dedicated compiler (nvcc) on Windows or Linux (Mariziale, Richard III & Roussev, 2007). A CUDA program executes in two separate components: host and GPU. The host component issues instructions on what operations to perform, while the GPU component creates the threads and rapidly completes the instructions. In addition, CUDA provides functions for memory management, controlling the GPU, support for OpenGL and Direct3D, and texture handling. The CUDA program alongside the GPU provides a cost-effective boost to the processing power of the computer system.
The approach also has its limitations, which include maximizing the use of shared memory, limiting access to global memory, and preventing serialization of threads running on the GPU. Depending on the application, these limitations are bearable when weighed against the results obtained and the time saved. With such increases in power, one may wonder why GPUs have not replaced regular processors for general-purpose computing. Several reasons lie behind this. For instance, GPU floating-point arithmetic was generally non-IEEE-compliant, and until fairly recently the hardware offered no support for integer arithmetic (Mariziale, Richard III & Roussev, 2007); the largest performance gains require floating-point computation, which makes general-purpose workloads that depend on integer arithmetic difficult to accelerate. Another problem is that GPUs are massively parallel by nature, and each branching operation incurs an additional cost in resources: as threads diverge, the GPU falls back to serial execution, defeating its intended purpose (Mariziale, Richard III & Roussev, 2007). Algorithms therefore need to be designed for a more parallel mode of operation. This should not be taken to mean GPUs are inefficient; rather, the GPU is best used for processor-intensive tasks such as code-breaking, leaving the CPU free to handle other tasks. If the GPU operated as the main processor, lower-priority tasks would eventually be locked out as threading increased, until the executing process terminated. A further shortcoming is that the APIs used for GPU programming are still not well suited to general-purpose programming, because they were designed specifically for graphics applications (Mariziale, Richard III & Roussev, 2007).
The GPU technology in various graphic cards proves that the power of these devices can be enhanced to improve the computer system performance. This case of their use in breaking the keys used in wireless internet bears witness to that and provides future developers with useful insight on the way forward for network security.
Conclusion
In this paper, the discussion has revolved around Wi-Fi technology and the issues surrounding the security of such networks. The internet, in practice a global network, has greatly added value to the lives of millions of people all over the world and continues to grow. For example, an individual interested in education today has access to institutions all over the world and can tap into the knowledge he or she desires without traveling. Through social networking sites such as Facebook and Twitter, people all over the world can interact and share ideas and experiences. An individual buying and selling stocks on Wall Street can be just as successful today in a remote village in Sudan as in Manhattan. The internet's contributions to humanity cannot yet be fully gauged, but, as with any innovation, it has raised new issues as well.
The security issues highlighted within the paper are proof of the vulnerability to which users of this great breakthrough are regularly exposed. It is for this reason that fast and conclusive action should be taken to close the loopholes that exist within networks that are so useful and serve so many purposes. Anyone with knowledge of the vulnerabilities within such a system must make an effort to guard against any hazard that may emanate from using the network for any purpose. It is also encouraging to note that hardware manufacturers are constantly improving the devices they offer, raising performance and reducing operating costs. Even though our systems are vulnerable, such action reflects the great and bright future ahead.
References
Hunton, P. (2009). A Growing Phenomenon of Crime and the Internet: A Cybercrime Execution and Analysis. Computer Law & Security Review, 25, 528-535.
Kocak, T., & Jagetia, M. (2008). A WEP Post Processing Algorithm for a Robust 802.11 WLAN Implementation. Computer Communications, 31, 3405-3409.
Mariziale, L., Richard III, G. G., & Roussev, V. (2007). Massive Threading: Using GPUs to Increase Performance of Digital Forensic tools. Digital Investigation, 4, 73-81.
Rowan, T. (2010). Negotiating Wi-Fi Security. Network Security, 2010, 8-12.
Williams, P. (2006). Cappuccino, Muffin, Wi-Fi – But What About the Security? Network Security, 2006(10), 13-17.
Technological advancements in the communication industry call for stringent measures to ensure the security of information transmitted through distribution channels. Such information is protected by transforming messages from their original readable text into a more complicated form, known as ciphertext, which requires special knowledge to access. This encryption technique ensures the confidentiality of information, as only the transmitter and the recipient have access to the secret key needed to decrypt the message (Brenton, 1999). Encryption has been successfully employed by many governments as well as militaries to enhance the secrecy of their communication. It is currently utilized in many civilian systems to protect data both in transit and in storage. Data stored in computers or other storage devices, such as flash discs, can be protected against leakage through encryption. Encrypting data in transit is also necessary to protect information from interception during communication over telephone, internet, and other communication systems (Brenton, 1999).
Applications of encryption
Pretty good privacy (PGP)
This is one of the encryption applications developed in the early nineties by Zimmermann to enhance the cryptographic security of transmitted information. PGP is a cryptosystem encompassing both public-key and conventional cryptography, and it compresses the plaintext before encryption, so both space and transmission time are used effectively (PGP, 2004). Compression also enhances resistance to cryptanalysis, since it eliminates patterns in the plaintext that such techniques exploit to crack the cipher. PGP then creates a one-time session key that encrypts the plaintext into ciphertext using a fast and secure conventional encryption algorithm. This session key is itself encrypted with the recipient's public key and transmitted to the recipient along with the ciphertext (PGP, 2004).
In decryption, the recipient's copy of PGP uses the private key to recover the session key. This key is then used to decrypt the ciphertext, making it readable again (PGP, 2004).
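The hybrid pattern described above, a fast conventional cipher keyed by a one-time session key, can be sketched in miniature. This is a toy illustration only: a hash-derived XOR keystream stands in for PGP's real conventional cipher, and the public-key wrapping of the session key is noted in a comment rather than implemented.

```python
import hashlib
import os

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from the session key (a toy
    stand-in for PGP's fast conventional cipher)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(plaintext: bytes) -> tuple:
    """Encrypt with a fresh one-time session key, as PGP does."""
    session_key = os.urandom(32)
    ks = keystream(session_key, len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, ks))
    # In real PGP, the session key would now be encrypted with the
    # recipient's public key and sent alongside the ciphertext.
    return ciphertext, session_key

def decrypt(ciphertext: bytes, session_key: bytes) -> bytes:
    """The recipient recovers the session key (via their private key
    in real PGP) and reverses the XOR."""
    ks = keystream(session_key, len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, ks))

ct, sk = encrypt(b"attack at dawn")
assert decrypt(ct, sk) == b"attack at dawn"
```

The design point is that only a short session key, not the whole message, needs the expensive public-key operation.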
“Smart” credit card
A smart card has a built-in microprocessor used for the verification process. Anyone using the card has to ascertain his or her identity whenever a transaction is made. The card and the reader execute a chain of encrypted exchanges to confirm that both parties to the transaction are genuine. The transaction itself is performed in encrypted form to enhance the security of the information (Brenton, 1999). As a result, the chances of either party defrauding the system are minimized. Such cards are currently used by many businesses in the U.S. as well as in Europe.
Personal Identification number (PIN)
This is a coded identification number that is entered at the automatic teller machine together with the bank card to ascertain the legitimacy of the bearer before a transaction is carried out. The PIN is stored in an encrypted form on the ATM card or in the bank's computers. Given the PIN and the bank's keys, it is possible to compute the cipher but not the reverse, since the transformation is one-way. This system protects the information against leakage or interception by adversaries (Brenton, 1999).
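The one-way property described above can be illustrated with a salted, iterated hash, a common way to store verification values. The function names and parameters here are illustrative assumptions, not how any particular bank's ATM network actually works.

```python
import hashlib
import hmac
import os

def enroll(pin: str) -> tuple:
    """Store only a salted one-way hash of the PIN, never the PIN itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)
    return salt, digest

def verify(pin: str, salt: bytes, stored: bytes) -> bool:
    """Recompute the cipher from the entered PIN; going the other way
    (recovering the PIN from the stored value) is computationally infeasible."""
    candidate = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)

salt, stored = enroll("4921")
assert verify("4921", salt, stored)
assert not verify("0000", salt, stored)
```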
Secure Electronic Transaction (SET)
This is a protocol developed by Visa and MasterCard that utilizes a public-key system to enhance the security of payment transactions in a business. The protocol ensures data integrity in addition to confidentiality. Moreover, it verifies the authenticity of the cardholder as well as the merchant. Thanks to the use of dual signatures, leakage of information is highly unlikely under this protocol (Segev, Porra, & Roldan, 1998).
Implications of encryption on organizations
Various corporate businesses as well as private ventures came to depend on information itself in the late 20th century as a result of the transition witnessed in the communication industry. This transition brought better access to affordable communications and gave such ventures the capability to obtain, store, and distribute vast amounts of information. Developments such as e-banking, personal computers, e-commerce, and internet use are among the outcomes of the revolution that influenced every aspect of business activity in that era (Segev, Porra, & Roldan, 1998). Cryptology has been fundamental in protecting information during communication, especially in the above-mentioned settings. It is therefore noteworthy that cryptology extends beyond providing secrecy to encompass protecting the integrity of information against tampering by adversaries. In e-commerce, for instance, the transactions between the customer and the merchant are protected through encryption so as to preserve the confidentiality of the information. Moreover, the merchant is assured of full payment, as the information concerning transactions is protected and the customer cannot claim otherwise (Segev, Porra, & Roldan, 1998).
As stated before, the science of encryption has been helpful not only in ensuring secrecy and confidentiality of the information but also in restoring integrity of any transaction across corporate networks. Besides, encryption also helps in verifying the authenticity of messages in a communication. According to PGP (2004) conventional encryption is both fast as well as convenient in the protection of stored data.
However, products built on encryption may not be perfect as far as protecting the integrity, secrecy, and authenticity of messages is concerned. Additional techniques are needed to ensure the authenticity and integrity of messages (Brenton, 1999). At the outset, encrypted e-mails have to be accompanied by digital signatures applied at the point of their creation so as to ensure the authenticity of the information; without such signatures, the sender can argue that the information was tampered with after it left their computer but before encryption. Additionally, sending e-mails from outside the organization's network may not be practical for mobile users of an encryption product. Using encryption to protect information can also fail when a mistake is made in designing or operating the system. In such circumstances, adversaries may access unencrypted information without needing to decrypt anything, paving the way for successful attacks. Moreover, poor handling of cipher keys also poses risks to data protection: such errors may enable adversaries to gain access to vital information in the communication (Brenton, 1999).
Trust has to be developed between the sender and the recipient of an encrypted message so as to keep the key secret and protect it from interception by any adversary. Anyone who intercepts the messages in a communication can forge or modify the information, exposing vital transaction details that may be used to sabotage the operations of the organization.
Evolution of old and current encryption practices
Originally, cryptography entailed concealing information and subsequently revealing it to legitimate users through the use of a secret key. This involved transforming information from plaintext to ciphertext and back via encryption and decryption, respectively, which ensured the security of such data. During the World Wars, encryption ensured only the confidentiality of written messages (Segev, Porra, & Roldan, 1998). However, the same principles have been found to augur well with modern technologies. Encryption currently encompasses the protection of information stored in computers as well as information flowing between such electronic equipment (Segev, Porra, & Roldan, 1998).
Besides, signals from fax machines as well as TVs are also encrypted, in addition to the verification of participants' identities in e-commerce. When combined with other techniques such as digital signatures, encryption ensures not only the confidentiality of messages but also the integrity and authenticity of information communicated across networks. Generally, the evolution of encryption as a technology for protecting information is attributed to changes in information technology, e-commerce, and internet use. Public-key cryptography provides for the secure exchange of information between individuals who have no prior security arrangements. Private keys, unlike public keys, are never shared, which improves the security of information: anyone holding the public key can only encrypt a message, not decrypt it (Segev, Porra, & Roldan, 1998).
Conclusion
Encryption has been an important technique for ensuring the confidentiality of information in communication. The technique transforms information from its original form, known as plaintext, into ciphertext, which requires a special key to access. The encrypted information therefore cannot be accessed by anyone except the transmitter and the recipient, who hold the secret key, so it is protected from interception. Developments in information technology, e-commerce, and the internet have made it necessary to protect both data in transit and data stored in computers. Encryption is therefore vital for organizations, as it enhances the security of information across networks. However, encryption alone may not be sufficient to secure information and requires other techniques to ensure the integrity and authenticity of messages in a communication (Segev, Porra, & Roldan, 1998).
Reference List
Brenton, C. (1999). Authentication and Encryption. Sybex, Inc. Web.
PGP. (2004). An Introduction to Cryptography. Web.
Bibliofind has experienced security breaches in its information systems. The firm lacked adequate security that could prevent unauthorized intrusion into its systems. The firm needed to encrypt its systems to prevent hackers from getting access to privileged information. This paper discusses how encryption would have strengthened Bibliofind's information security.
Encryption ensures information that is shared within a system is coded to prevent access by an unauthorized party. The mode in which the message is coded allows only the sender and the receiver to decipher it. The text sent in the message cannot make any meaning to a person it is not intended for.
Bibliofind needed to encrypt all the messages that were shared through its web servers to safeguard them. Passwords and other information about the company needed to be protected from any threat of intrusion. The company should have secured its databases, web servers and other data crucial for its existence.
The company should have used digital signatures when conducting business transactions with its clients. This would have prevented unauthorized third parties from learning the contents of the messages exchanged and would have kept the company from falling victim to fraudsters' schemes.
Bibliofind would have gained the trust of its users by making its business transactions and e-commerce processes foolproof. This would have made it difficult for unauthorized parties to decipher the nature of business transactions it had with its clients. The use of digital signatures and encrypted messages would have made the company’s web transactions more confidential.
Bibliofind needed to analyze the strength of its web servers. Weaknesses in these servers should have been tested to determine which areas needed to be secured the most. Directory listings containing sensitive file names needed to be protected from intrusion by third parties.
Files containing passwords, identities and privileged information of users on internal servers needed to be encrypted to avoid intrusion. The company’s problems stemmed from exposure of its user accounts and passwords to unauthorized users who used them to access more sensitive information it held.
The web servers should have been programmed to control web users' access by authenticating their user certificates. The intruders easily became aware of all operations that the firm carried out internally and externally. Usernames, passwords, and personal information of users should have been kept in a separate, highly secured database.
The systems should have been equipped to be able to verify sources and identities of users seeking access. The systems needed keys for all users that had access to the server. The administrators of the servers should have sensitized users to use passwords which are difficult to decipher.
Hackers became aware of the private details of clients and other external users the firm had regular contact with. Bibliofind needed to have access controls to its databases to limit the threat of intrusion or online attacks. The company’s databases contained privileged information which was not supposed to be displayed to unauthorized users.
The encryption solutions used should have been applied to all other areas with potential vulnerability such as the web servers, passwords, emails and the database. The encryption programs chosen by the company should have been vetted to ensure their suitability for the firm’s operations.
In conclusion, Bibliofind experienced major losses because of failure to secure its information systems. The company should have encrypted its systems to avoid being compromised by unauthorized users.
People can be divided according to different issues; with regard to “strong encryption,” they are divided into those who encrypt and those who do not. Nowadays, a number of discussions and misunderstandings surround the term “strong encryption” and its importance for society. Many American states and several European countries support the idea of banning strong encryption. However, there are also countries, like the Netherlands, that are eager to give many facts in favor of strong encryption. The point is that not all people are aware of this term and its characteristics. Therefore, before defining strong encryption as a moral, legal, or even economic issue, it is necessary to comprehend it as a matter of social convention and make sure that all people know what strong encryption means and how it may influence human life.
General facts about strong encryption
It is easy to find a definition of strong encryption online and get an idea of what it means. However, even when the definition is memorized, not many people are able to comprehend its essence, worth, and possible impact on society. Strong encryption is a form of communication protection that resists any kind of cryptanalysis and keeps information available and readable only to an intended group of people. People invent and develop new programs and methods to recover the information in such messages; strong encryption, however, aims to create protection powerful enough that much time and effort are required to decrypt even the desired portion of information. As a rule, not all attempts to decrypt information are successful.
Strong encryption and morality
The governments of many countries and the representatives of several American states claim that strong encryption is a serious threat to people and their security. In the light of the terrorist attacks in Paris and California, the Obama administration made several attempts to ban encrypted communication (Peterson par. 10). Politicians and military representatives want to change the conditions under which people may communicate, and they invoke safety measures to support their positions. However, it is necessary to remember that the Internet belongs to people around the whole globe, and governments do not have the right to control it or define the conditions under which people may use it (Peterson par. 2). Many organizations try to raise the importance of moral issues in the debate over banning strong encryption.
Strong encryption as a matter of social convention
Still, it is necessary to understand that not all people know enough about strong encryption and the capabilities it gives them. Many people continue using the Internet as they did several years ago and enjoy the possibilities they get. To make strong encryption an ethical issue, it is necessary first to introduce it as a matter of social convention. People should know as much as possible about the positive and negative aspects of strong encryption and make their own independent decisions about whether to support or ban it. America is one of the countries that have promoted the idea of personal freedoms for a long period of time, yet as soon as the government sees a freedom as a threat, it tries to ban it in short order. Such an example should bother society and make people think about what other opportunities could be banned by the government as soon as they are identified as threats.
Conclusion
In general, society should understand that the intentions to gain control over everything are useless. It is not an attempt to save people. It is just the way to make people think about other more dangerous ways to overcome the law and achieve the goals set.
References
Peterson, Andrea. “Debate over Encryption Isn’t just Happening in the US.” NZ Herald (2016). Web.
This paper proposes a lightweight image encryption scheme based on three stages. The first stage incorporates the use of the Rössler attractor of the Rössler system; the second stage applies a PRNG-generated S-Box; the third stage implements the Recaman’s Sequence. The performance of the proposed encryption scheme is evaluated using pre-determined metrics. The metrics’ computed values indicate performance comparable to counterpart schemes from the literature at a meager cost in processing time. This trait suggests that the proposed image encryption scheme has potential for real-time image security applications.
Introduction
The tremendous evolution in digital image processing and network communications has created an extensive demand for real-time secure image transmission over the Internet and through wireless networks [1]. From these considerations, data security in cryptography and steganography has become a vital means to ensure safe operations and the usage of millions of online applications [2]-[4]. Cryptography, which plays a critical role in information security, has captured the attention of scientists and engineers with its contribution to research in recent decades [5]–[7]. Furthermore, the current studies focus on refining the security of image transmission. For instance, new cryptosystems, including the cellular automata and chaos, have been proposed. Chaos is characterized by pseudo-randomness, ergodicity, and high sensitivity to initial conditions and parameters, so it is extensively used in image encryption schemes. The outcomes of the described approaches have commonly involved the usage of one or more PRNGs, as well as true RNGs. The literature incorporates examples of pooling chaos theory [8], Recaman’s sequence [9], electrical circuits [10], quantum physics [11], and many others.
The Rössler system, originally introduced by Otto Rössler in the 1970s, is a third-order continuous-time system with a single quadratic cross-term that depends on three parameters [12]. Its differential equations create a continuous-time dynamical system exhibiting chaotic dynamics associated with the fractal properties of the attractor [13]. These dynamics commonly involve the generation of a single-lobe chaotic attractor (spiral type) following a period-doubling cascade of a limit cycle, or a more complicated chaotic attractor (screw type) due to the presence of homoclinic orbits [14]. Some properties of the Rössler system can be derived from linear methods such as eigenvectors; however, the system’s main features require non-linear methods, such as Poincaré maps and bifurcation diagrams. The original Rössler paper stated that the Rössler attractor was designed to operate similarly to the Lorenz attractor while being easier to analyze qualitatively [13].
An S-Box (substitution-box) is a fundamental component of symmetric-key algorithms that performs the substitution process in cryptography. In block ciphers, S-boxes are typically utilized to obscure the relationship between the key and the cipher text, thus ensuring Shannon’s property of confusion. Classic S-boxes are used in symmetric-key algorithms such as the Advanced Encryption Standard (AES) and the Data Encryption Standard (DES); however, one of the primary concerns with such fixed S-boxes is their statistical behavior. Therefore, PRNGs and chaotic systems are employed to assemble an S-box and introduce dynamics [15], [16]. The authors of [17] introduced a novel approach to constructing S-boxes based on the Rössler system and demonstrated the approach’s effectiveness against extensive attacks.
The Recaman’s sequence is a striking sequence of integers that is simple to define, yet its complexity shows how effective it can be against cryptanalysis. The authors of [18] used the Recaman’s sequence for image steganography in 2D images, demonstrating the extensive results of the proposed scheme.
Ultimately, the current paper proposes an image encryption scheme based on three consecutive stages. The first stage incorporates the usage of the Rössler system, the second stage concerns the construction of an S-Box, and the third stage emphasizes the usage of the Recaman’s Sequence. The current paper has the following structure: Section II briefly presents an overview of the Rössler system, the PRNG S-Box, and the Recaman’s Sequence used in the proposed image encryption scheme. Section III outlines the numerical results of the computations and testing and provides appropriate commentary. Ultimately, Section IV concludes the paper and proposes directions for future work.
The Proposed Image Encryption Scheme
As mentioned briefly before, the proposed image encryption scheme comprises three stages. The first stage utilizes the Rössler system; the second stage concerns the construction of the PRNG substitution box (S-Box); the third stage employs the Recaman’s sequence. The following sections introduce each concept and provide a brief overview of the systems.
Rössler Attractor
The Rössler system is a widely used prototype of a continuous dynamical system defined by the following set of three nonlinear equations:

dx/dt = −y − z,
dy/dt = x + ay,
dz/dt = b + z(x − c),

where a, b, and c are non-negative parameters. The system approaches chaos through a period-doubling bifurcation route.
In our encryption scheme, the parameter values used are a = 0.1, b = 0.01, and c = 14, resulting in the chart demonstrated in Fig. 1. A 2D representation of the Rössler attractor points is shown in Fig. 2.
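The attractor points used in the first stage can be generated numerically; the sketch below integrates the Rössler equations with a classical fourth-order Runge-Kutta step, using the parameter values quoted above. The step size, initial state, and point count here are illustrative assumptions, not the authors' exact settings.

```python
def rossler_step(state, a=0.1, b=0.01, c=14.0, dt=0.01):
    """One fourth-order Runge-Kutta step of the Rössler system
    dx/dt = -y - z, dy/dt = x + a*y, dz/dt = b + z*(x - c)."""
    def f(s):
        x, y, z = s
        return (-y - z, x + a * y, b + z * (x - c))
    k1 = f(state)
    k2 = f(tuple(s + dt / 2 * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + dt / 2 * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6 * (f1 + 2 * f2 + 2 * f3 + f4)
                 for s, f1, f2, f3, f4 in zip(state, k1, k2, k3, k4))

# Generate a trajectory of 250 points; in the scheme, such points are
# later binarized to form part of the key stream k_CA.
state = (0.1, 0.0, 0.0)
trajectory = []
for _ in range(250):
    state = rossler_step(state)
    trajectory.append(state)
```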
S-Box
A substitution box is a pivotal constituent of modern-day block ciphers that performs the nonlinear substitution step between plain text and cipher text [19]. Incorporating the S-box, a nonlinear mapping between the input and output data is established to create confusion [20]. Thus, the security of the data relies on the substitution process. Substitution is a nonlinear transformation that performs confusion of bits [21], providing the cryptosystem with the confusion described by Shannon [22]. An S-box generally takes m input bits and transforms them into n output bits; such a system is called an m×n S-box and is often implemented as a lookup table. S-boxes are carefully selected to resist and obstruct linear and differential cryptanalysis [23], [24].
A simple and efficient technique for S-box construction, using the idea of novel transformation, modular inverse, and permutation, was inherited from the authors of [25], where an example of an S-box was evaluated and analyzed to verify its cryptographic forte using standard criteria. Consequently, its performance was examined by comparing it with other recently projected S-boxes. The research has demonstrated extensive results that meet the requirements of the benchmarks, which validate the implementation of the technique. Furthermore, the investigation transparently indicated the high efficiency of the proposed S-box by comparing it with analogs.
In our encryption scheme, we used a randomly generated S-box with a 16×16 dimension, whose values can be seen in Table IV.
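A random byte-substitution table of this kind can be sketched as follows. The PRNG seed here is illustrative, and the resulting table is not the one in the paper's Table IV.

```python
import random

# A randomly generated byte S-box: a permutation of 0..255, conceptually
# laid out as a 16x16 table (row = high nibble, column = low nibble).
rng = random.Random(42)  # illustrative seed, not the authors' choice
sbox = list(range(256))
rng.shuffle(sbox)

# Inverse table for decryption.
inv_sbox = [0] * 256
for i, v in enumerate(sbox):
    inv_sbox[v] = i

def substitute(data: bytes) -> bytes:
    """Replace each byte by its S-box entry."""
    return bytes(sbox[b] for b in data)

def unsubstitute(data: bytes) -> bytes:
    """Undo the substitution using the inverse table."""
    return bytes(inv_sbox[b] for b in data)

assert unsubstitute(substitute(b"pixels")) == b"pixels"
```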
Recaman’s Sequence
In order to generate the Recaman’s sequence, we assume that a1 = 1 and follow the mathematical form demonstrated below:

an = an−1 − n, if an−1 − n > 0 and the value does not already occur in the sequence,
an = an−1 + n, otherwise,

where n is the position of the element in the sequence.
Fig. 3 shows the 2D graphical representation of the first 200 iterations, which were used in our proposed encryption scheme to generate a key of random bits.
Image Encryption and Decryption Processes
The proposed image encryption scheme is implemented as follows. First, an image of appropriate dimensions is chosen, and its pixels are converted into a 1D stream of bytes; these bytes are then converted into a bit stream d. Second, the mean intensity of the image pixels is calculated. The resulting value is a relatively small number, which we multiply by a magnifying factor fM; let us denote the resulting value by μ. Next, we cyclically shift d to the right by μ places, and the resulting bit stream, now denoted dμ, is XORed with kCA. kCA is the first key, a bit stream of the same length as d and dμ, consisting of a repetition of the first NCA bits resulting from the binary representation of the first 250 Rössler numbers in the Rössler attractor. Lastly, we denote the resulting bit stream as C1, which concludes the first step of the encryption.
For the next step, we utilize the S-box, as in Table IV, to substitute the decimal representation of every 8 bits of the bit stream acquired after the first step. We then convert those decimal representations back into a bit stream C2. At this point, we apply the x and y coordinates of each of the points to the Recaman’s sequence equations and flatten them into a single 1D array. Next, we list-plot these values in 2D, as demonstrated in Fig. 3, and convert the integer values to binary.
These newly obtained bit streams of length NL would make up the seed of our Recaman’s Sequence based key. We repeat those NL bits until they are of the same length as d and C1, thus forming the second key, denoted as kL. Consequently, we XOR kL with C2 obtaining C3, which concludes the third step of the encryption. Ultimately, C3 is reshaped back into an image of the same dimensions as those of the plain image, obtaining the encrypted image. The decryption process is implemented in a reverse manner as to that of the encryption process.
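The shift-and-XOR structure of the steps above can be sketched as follows. The bit streams, keys, and shift value here are tiny illustrative stand-ins for the real d, k_CA, and k_L, and the S-box substitution step is omitted for brevity.

```python
def rotate_right(bits: list, mu: int) -> list:
    """Cyclically shift a bit stream to the right by mu places."""
    mu %= len(bits)
    return bits[-mu:] + bits[:-mu]

def xor(bits: list, key: list) -> list:
    """Bitwise XOR of a stream with an equal-length key."""
    return [b ^ k for b, k in zip(bits, key)]

def tile(seed: list, length: int) -> list:
    """Repeat a short key seed until it covers the whole stream."""
    return (seed * (length // len(seed) + 1))[:length]

# Toy stand-ins (illustrative values, not the scheme's real keys):
d = [1, 0, 1, 1, 0, 0, 1, 0] * 4   # plain-image bit stream
mu = 5                              # magnified mean intensity
k_ca = tile([1, 0, 1], len(d))      # key derived from the Rössler stream
k_l = tile([0, 1, 1, 0], len(d))    # key derived from Recaman's sequence

c1 = xor(rotate_right(d, mu), k_ca)  # step 1: shift, then XOR with k_CA
# (step 2, the S-box substitution, would transform c1 here)
c3 = xor(c1, k_l)                    # step 3: XOR with k_L

# Decryption reverses each step: undo both XORs, then shift back left.
recovered = rotate_right(xor(xor(c3, k_l), k_ca), len(d) - mu)
assert recovered == d
```

Because XOR is its own inverse and the cyclic shift is undone by shifting the remaining distance, decryption is exactly the encryption pipeline run backwards, as the text states.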
Numerical Results and Performance Evaluation
The current section outlines the numerical results of the proposed lightweight image encryption scheme. Performance is evaluated and compared to counterpart algorithms found in the literature. The proposed scheme is implemented using the computer algebra system Wolfram Mathematica® on a machine running Windows 10 Enterprise, equipped with a 2.3 GHz 8-core Intel® Core™ i7 processor and 32 GB of 2400 MHz DDR4 memory. The utilized keys are assigned the following values: NCA = 250, NL = 200, fM = 10^6, and λ = 10. Four images commonly used in image processing applications and experimentation are utilized: Lena, Mandrill, Peppers, and House, each with dimensions of 256 × 256.
Fig. 4 demonstrates the correlation coefficient diagrams of the plain and encrypted Lena images. It is apparent that the horizontal, vertical, and diagonal correlation coefficients of adjacent pixels in the plain image are linear. However, inspection of the plots generated from the encrypted image shows that they are uniform and have a scatter-like distribution. This signifies the resistance of the proposed scheme to statistical analyses and attacks.
Table III lists the computed MSE and PSNR values of our proposed scheme, as well as those of two counterparts from the literature, particularly [26] and [27]. A larger MSE value signifies an improved level of security. Our proposed scheme outperforms the MSE values of [27] but achieves lower performance than that demonstrated in [26]. Since the PSNR is inversely proportional to the MSE, the comparison among the three schemes in terms of PSNR carries the same significance.
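The MSE and PSNR metrics referred to in Table III have standard definitions that can be sketched directly; the pixel lists below are toy data, not the paper's images.

```python
import math

def mse(img_a: list, img_b: list) -> float:
    """Mean squared error between two equally sized 8-bit images
    (flattened to pixel lists). A larger MSE between plain and
    encrypted images indicates stronger encryption."""
    return sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)

def psnr(img_a: list, img_b: list, max_value: int = 255) -> float:
    """Peak signal-to-noise ratio in dB; inversely related to the MSE."""
    m = mse(img_a, img_b)
    return float("inf") if m == 0 else 10 * math.log10(max_value ** 2 / m)

# Toy 4-pixel "images" standing in for a plain/encrypted pair.
plain = [10, 200, 45, 90]
cipher = [210, 3, 160, 77]
error = mse(plain, cipher)
ratio = psnr(plain, cipher)
```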
Conclusion and Future Works
In this paper, we proposed an image encryption scheme based on three consecutive stages. The first stage incorporated the usage of the Rössler system; the second stage implemented the construction of a PRNG S-Box; the final stage concerned the usage of the Recaman’s Sequence. Performance evaluation of the proposed scheme was carried out utilizing several appropriate metrics and analyses, including visual inspection of both plain and encrypted images, a histogram analysis, a cross-correlation analysis, the inspection of entropy values, and computation of the MSE and PSNR values. We then compared the proposed image encryption scheme with counterpart methods from the literature, demonstrating comparable security performance. Ultimately, the research provides evidence of low processing time, signifying the practicality of the method and its potential usage in real-time applications. Future work is expected to yield improved performance and should include the implementation of another substitution phase between the two proposed stages of image encryption.
References
A. El Mahdy and W. Alexan, “A threshold-free llr-based scheme to minimize the ber for decode-and-forward relaying,” Wireless Personal Communications, vol. 100, no. 3, pp. 787–801, 2018.
M. I. Mihailescu and S. L. Nita, “Big data cryptography,” in Pro Cryptography and Cryptanalysis. Springer, 2021, pp. 379–400.
W. Alexan, M. El Beheiry, and O. Gamal-Eldin, “A comparative study among different mathematical sequences in 3d image steganography,” International Journal of Computing and Digital Systems, vol. 9, no. 4, pp. 545–552, 2020.
W. El-Shafai, I. M. Almomani, and A. Alkhayer, “Optical bitplane-based 3d-jst cryptography algorithm with cascaded 2d-frft encryption for efficient and secure hevc communication,” IEEE Access, vol. 9, pp. 35004–35026, 2021.
I. Verbauwhede, “The cost of cryptography: Is low budget possible?” in 2011 IEEE 17th International On-Line Testing Symposium, 2011, pp. 133–133.
G. De Meulenaer, F. Gosset, F.-X. Standaert, and O. Pereira, “On the energy cost of communication and cryptography in wireless sensor networks,” in 2008 IEEE International Conference on Wireless and Mobile Computing, Networking and Communications. IEEE, 2008, pp. 580–585.
H. Rifa-Pous and J. Herrera-Joancomartí, “Computational and energy costs of cryptographic algorithms on handheld devices,” Future Internet, vol. 3, no. 1, pp. 31–48, 2011.
K. M. Hosny, Multimedia security using chaotic maps: principles and methodologies. Springer Nature, 2020, vol. 884.
S. Wolfram, A new kind of science. Wolfram media Champaign, IL, 2002, vol. 5.
C. Wen, X. Li, T. Zanotti, F. M. Puglisi, Y. Shi, F. Saiz, A. Antidormi, S. Roche, W. Zheng, X. Liang et al., “Advanced data encryption using 2d materials,” Advanced Materials, p. 2100185, 2021.
Y. Zhang, H.-P. Lo, A. Mink, T. Ikuta, T. Honjo, H. Takesue, and W. J. Munro, “A simple low-latency real-time certifiable quantum random number generator,” Nature communications, vol. 12, no. 1, pp. 1–8, 2021.
O. E. Rössler, “An equation for hyperchaos,” Physics Letters A, vol. 71, no. 2-3, pp. 155–157, 1979.
O. E. Rössler, “An equation for continuous chaos,” Physics Letters A, vol. 57, no. 5, pp. 397–398, 1976.
R. Genesio, G. Innocenti, and F. Gualdani, “A global qualitative view of bifurcations and dynamics in the Rössler system,” Physics Letters A, vol. 372, no. 11, pp. 1799–1809, 2008.
G. Wang, “Chaos synchronization of discrete-time dynamic systems with a limited capacity communication channel,” Nonlinear Dynamics, vol. 63, no. 1, pp. 277–283, 2011.
G. Alvarez, F. Montoya, M. Romera, and G. Pastor, “Cryptanalysis of a discrete chaotic cryptosystem using external key,” Physics Letters A, vol. 319, no. 3-4, pp. 334–339, 2003.
A. Belazi, R. Rhouma, and S. Belghith, “A novel approach to construct s-box based on Rössler system,” in 2015 International Wireless Communications and Mobile Computing Conference (IWCMC). IEEE, 2015, pp. 611–615.
S. Farrag and W. Alexan, “Secure 2d image steganography using Recaman’s sequence,” in 2019 International Conference on Advanced Communication Technologies and Networking (CommNet). IEEE, 2019, pp. 1–6.
M. F. Khan, A. Ahmed, and K. Saleem. “A novel cryptographic substitution box design using Gaussian distribution,” IEEE Access, vol. 7, 2019, pp. 15999-16007.
M. Ahmad, H. Chugh, A. Goel, and P. Singla, “A chaos based method for efficient cryptographic s-box design,” in International Symposium on Security in Computing and Communication. Springer, 2013, pp. 130–137.
V. M. Silva-Garcia, R. Flores-Carapia, C. Renteria-Marquez, B. Luna-Benoso, and M. Aldape-Perez. “Substitution box generation using Chaos: An image encryption application,” Applied Mathematics and Computation, vol. 332, 2018, pp. 123-135.
C. E. Shannon, “Communication theory of secrecy systems,” The Bell system technical journal, vol. 28, no. 4, pp. 656–715, 1949.
E. Tanyildizi and F. Ozkaynak, “A new chaotic s-box generation method using parameter optimization of one dimensional chaotic maps,” IEEE Access, vol. 7, pp. 117829–117838, 2019.
M. Ahmad, E. Al-Solami, A. M. Alghamdi, and M. A. Yousaf, “Bijective s-boxes method using improved chaotic map-based heuristic search and algebraic group structures,” IEEE Access, vol. 8, pp. 110397–110411, 2020.
A. H. Zahid, E. Al-Solami, and M. Ahmad, “A novel modular approach based substitution-box design for image encryption,” IEEE Access, vol. 8, pp. 150326–150340, 2020.
M. Khan and F. Masood, “A novel chaotic image encryption technique based on multiple discrete dynamical maps,” Multimedia Tools and Applications, vol. 78, no. 18, pp. 26203–26222, 2019.
I. Younas and M. Khan, “A new efficient digital image encryption based on inverse left almost semi group and lorenz chaotic system,” Entropy, vol. 20, no. 12, p. 913, 2018.
Encrypting information so that it can only be read by authorized parties is a concept that has been applied throughout history and has become a crucial part of today’s information technology and security. Most algorithms take an arbitrary string of bits, called a key, and use it with a cipher algorithm to transform a message into a seemingly random string. Larger key sizes correspond to more secure encryption, with sizes between 128 and 256 bits generally used in modern applications.
Cryptographic keys are generally created through a secure random number generation algorithm, which receives its primary data from a non-deterministic source. This approach ensures that the output is as close to true randomness as possible, whereas ordinary random number generation algorithms use predictable inputs, such as the system clock. Symmetric encryption algorithms use a single key for both encryption and decryption (Manico & Detlefsen, 2014). Asymmetric algorithms use different keys for encryption (public key) and decryption (private key), thus allowing users to receive encrypted messages without exposing a means of decrypting them (Manico & Detlefsen, 2014). Modern encryption algorithms are secure enough that defeating them without access to the key is infeasible: while theoretically possible, it would take longer than a human lifetime even with the most advanced hardware. Vulnerabilities, however, can reduce that time enough to make unauthorized decryption feasible; the algorithms currently in use as standards have no known vulnerabilities.
Encryption Algorithm Recommendation
The choice of an encryption algorithm for long-term file storage involves certain security considerations. Archival does not imply transferal or modification of files; encryption will be applied to files to prevent unauthorized parties from accessing their contents. The data may have to be read at an unknown future date, meaning that the encryption must be fully reversible. Besides unauthorized access, archival presents another potential risk: unauthorized alterations to the stored files. To mitigate this threat, encrypted files should be signed, making it obvious if they were changed between their initial encryption and later access (Manico & Detlefsen, 2014). Since the enciphered data does not need to be transferred to or used by an entity other than Artemis Financial, there is no need to use an asymmetric algorithm.
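The sign-then-verify practice described above can be sketched with Python’s standard `hmac` module; `mac_key` and the archive bytes below are illustrative placeholders, not values from the text.

```python
import hmac
import hashlib
import secrets

mac_key = secrets.token_bytes(32)  # separate key used only for authentication

def sign(ciphertext: bytes) -> bytes:
    """Tag the encrypted file so later alterations are detectable."""
    return hmac.new(mac_key, ciphertext, hashlib.sha256).digest()

def verify(ciphertext: bytes, tag: bytes) -> bool:
    # compare_digest avoids timing side channels during comparison
    return hmac.compare_digest(sign(ciphertext), tag)

archived = b"...encrypted archive bytes..."
tag = sign(archived)
assert verify(archived, tag)             # untouched file passes
assert not verify(archived + b"x", tag)  # any alteration is flagged
```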
Current government regulations primarily require that confidential information be kept secure without mandating the use of specific measures. Encryption, where not explicitly required by regulation, is often listed as a suggested solution for data security. Examples of such legislation are the Federal Trade Commission’s Standards for Safeguarding Customer Information and the European Union’s General Data Protection Regulation (European Union Agency for Fundamental Rights and Council of Europe, 2018; Federal Trade Commission, 2019). By encrypting its archives, Artemis Financial complies with such regulations.
Based on these considerations, the Advanced Encryption Standard (AES) is the best option. It is the generally accepted standard, meaning that if a vulnerability is discovered, it will be publicized quickly. AES is a symmetric algorithm: the same cryptographic key is used for enciphering and deciphering data. Although symmetric algorithms can be viewed as less secure than asymmetric ones, the difference is not critical when the encrypted data is not intended for transfer. Symmetric algorithms are also faster than asymmetric ones, though this advantage matters little for archival. Similarly, while a 128-bit key already makes cracking AES infeasible, larger keys can be used as a form of future-proofing at the expense of longer encryption and decryption times.
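The symmetric property described above, one key performing both operations, can be illustrated with a short, self-contained sketch. The toy cipher below XORs data with a SHA-256-derived keystream; it demonstrates the same-key principle only, is not AES, and should never be used in place of a vetted cipher.

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR data with a SHA-256 keystream.
    Illustrates the same-key property only; real systems use AES."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

key = secrets.token_bytes(32)
record = b"Artemis Financial archival record"
ciphertext = keystream_xor(key, record)          # encrypt...
assert keystream_xor(key, ciphertext) == record  # ...and decrypt with the same key
```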
References
European Union Agency for Fundamental Rights and Council of Europe (2018). Handbook on European data protection law.
Federal Trade Commission (2019). Standards for safeguarding customer information. Federal Register, 84(65).
Manico, J., & Detlefsen, A. (2014). Iron-Clad Java. Oracle Press.
Emerging technologies have globalized trading and communication systems, with an overwhelming effect on the world as most businesses re-align to implement e-commerce. These transactions and communication networks need protection against unauthorized access, and several methods have been established to safeguard online transactions as well as personal identification details. Most of these methods employ encryption, which is essential in protecting passwords, private communications, and online payments, among others. Cryptography, the science of writing in secret codes, originated in ancient Egypt as inscriptions. In modern technology it serves as a protective measure where communications are transmitted through unsafe media, and it is employed for authentication, integrity, non-repudiation, and privacy. There are different types of cryptographic algorithms, such as hash functions and private- and public-key cryptography. This paper explores the definition, structure, and use of Rivest, Shamir, and Adleman’s (RSA) public-key cryptographic algorithm (Rivest, Shamir & Adleman, 1978).
Definition
RSA, named after its inventors Rivest, Shamir, and Adleman, emerged in 1977 and has since been used for security purposes. It was the first algorithm of its kind to be used for signatures. Its use has been far and wide, and it has been patent-free since the year 2000. Since its invention, RSA has been applied in several security implementations, including digital signatures and encryption. Its security rests on the difficulty of factoring large integers, which makes it comparatively easy to use and understand. For this reason, RSA is the most extensively used algorithm for online security purposes, and its use has extended to IP data, e-mail, conferencing services, transport data, and more.
Theory
Prior to its invention in 1977 by the three MIT scholars Rivest, Shamir, and Adleman, Clifford Cocks had proposed an equivalent system for the UK intelligence agency in 1973. However, this did not succeed, as the computers required for the exercise were very expensive. When RSA was invented in 1977 and patented through MIT in 1983, the institution was granted a 17-year patent which was to expire in 2000; RSA Security released the algorithm to the public that year. RSA employs trap-door ciphers, in which encoding and decoding keys are generated together: those who hold the decoding key can generate the corresponding encoding key, but a decoding key cannot feasibly be derived from an encoding key, and this asymmetry is what protects information. RSA applies this cipher principle in its key generation (Rivest, Shamir & Adleman, 1978).
The RSA algorithm involves three stages: key generation, then encryption, and finally decryption. RSA uses a public key and a private key: the public key is employed to encrypt messages, while the private key performs decryption and is kept secret (Ireland, 2011, p. 1).
Key Generation
Key generation entails creating the public and private key pair. This is done as follows:
Choose two distinct prime numbers, p and q. These should be chosen at random (usually via primality tests) to enhance security, and care should be taken that they have the same bit length.
The second step involves computing the modulus of the two numbers: n = pq.
This step involves further computation using Euler’s totient function: φ(n) = (p − 1)(q − 1).
The fourth step involves choosing an integer e such that 1 < e < φ(n) and e is co-prime to φ(n), that is, gcd(e, φ(n)) = 1. The value e is then released as the exponent of the public key. The bit length of e affects security; very short values such as 3 have proved less secure than others.
The next step involves finding the private key exponent, usually denoted d, which is computed from e by the extended Euclidean algorithm as d = e^(-1) mod φ(n).
Therefore, the public key consists of the public exponent e and the modulus n, while the private key is the exponent d, which must be kept secret (RSA Security, 2011, p. 1).
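As a minimal sketch of these steps, the toy parameters below (p = 61, q = 53, e = 17, illustrative values that do not come from the text) walk through key generation; Python 3.8+’s three-argument `pow` with exponent −1 computes the same modular inverse the extended Euclidean algorithm would.

```python
from math import gcd

# Toy RSA key generation following the steps above. The primes are
# illustrative only; real keys use primes of a thousand or more bits.
p, q = 61, 53
assert p != q

n = p * q                      # step 2: modulus
phi = (p - 1) * (q - 1)        # step 3: Euler's totient phi(n)

e = 17                         # step 4: 1 < e < phi(n), co-prime to phi(n)
assert gcd(e, phi) == 1

d = pow(e, -1, phi)            # step 5: private exponent d = e^(-1) mod phi(n)
assert (e * d) % phi == 1

print(f"public key (n, e) = ({n}, {e}); private exponent d = {d}")
```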
Encryption stage
In this process the public key is transmitted while the private key is kept secret. For instance, if George transmits a message X to Mary, Mary sends him her encryption key (n, e) and keeps the private exponent d. George then changes X into an integer x such that 0 < x < n, using a padding scheme that they have agreed on, and computes the ciphertext c = x^e mod n using exponentiation by squaring (Riikonen, 2002).
Decryption stage
This stage uses the private exponent d to invert the encryption. Continuing the case above, x = c^d mod n, meaning that once x is recovered, Mary can obtain X by reversing the initially agreed padding scheme (Davis, 2003, p. 1).
Mathematics
The RSA algorithm entails the use of several mathematical theorems: Fermat’s little theorem and its extension, as well as the Chinese remainder theorem. These theorems underpin key generation, encryption, and decryption, and establish the relationship between the public and private keys. The following calculations are involved in the RSA algorithm (Menez, et al., 2002).
It starts by selecting random prime numbers p and q, confirming that p ≠ q. The modulus n is then computed as n = pq, and φ(n) = (p − 1)(q − 1). The public exponent e satisfies 1 < e < φ(n) and gcd(e, φ(n)) = 1. These are used to compute d = e^(-1) mod φ(n), giving the public key as (n, e) and the private key as d. Encryption is therefore c = m^e mod n, while decryption is m = c^d mod n. For digital signatures, s = H(m)^d mod n, and verification computes m’ = s^e mod n, which is correct only when m’ = H(m), where H is a hash function (Coppersmith, 1997, p. 22).
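The signature formulas s = H(m)^d mod n and m’ = s^e mod n can be exercised with toy parameters (those of the worked example below); reducing the SHA-256 digest modulo n is an artifact of the tiny modulus and stands in for the padding a real implementation would use, and the message itself is hypothetical.

```python
import hashlib

# Toy RSA signature: s = H(m)^d mod n, verified as m' = s^e mod n.
n, e, d = 3337, 79, 1019

def H(message: bytes) -> int:
    # Reduce the digest mod n only because the toy modulus is tiny.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

message = b"pay 100 to Mary"
s = pow(H(message), d, n)          # signed with the private exponent d
assert pow(s, e, n) == H(message)  # anyone holding (n, e) can verify
```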
For example, if p = 47 and q = 71,
then n = 3337;
φ(n) = 46 × 70 = 3220.
By letting e = 79, then d = 79^(-1) mod 3220 = 1019.
Therefore, from the formulae above, the public key is (n, e), while the private key is d.
Also, by discarding p and q, we have:
Encrypt message m = 688: c = 688^79 mod 3337 = 1570.
Decrypt message c = 1570: m = 1570^1019 mod 3337, which gives m = 688.
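These figures can be checked directly with Python’s three-argument `pow`, which performs modular exponentiation by squaring:

```python
# Checking the worked example: p = 47, q = 71 give n = 3337, phi(n) = 3220.
p, q, e = 47, 71, 79
n = p * q
phi = (p - 1) * (q - 1)

d = pow(e, -1, phi)          # modular inverse (Python 3.8+)
c = pow(688, e, n)           # encrypt m = 688
m = pow(c, d, n)             # decrypt

print(d, c, m)  # 1019 1570 688
```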
How it works
RSA’s security rests on the difficulty of factoring: recovering the private key from the public one requires solving the RSA problem (RSAP), which is believed to be as hard as factoring the modulus. Factoring large numbers is computationally infeasible, while the legitimate operations of key generation, encryption, and decryption remain fast. Implementation of RSA involves tools such as arbitrary (multiple) precision arithmetic, a prime number generator, and a PRNG (pseudo-random number generator).
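The flip side is that a modulus small enough to factor gives up the private key immediately. Using the worked example’s n = 3337, naive trial division recovers p, q, and hence d:

```python
from math import isqrt

# With a toy modulus, trial division factors n instantly, and the factors
# hand an attacker the private exponent. n and e are from the worked example.
n, e = 3337, 79

p = next(f for f in range(2, isqrt(n) + 1) if n % f == 0)
q = n // p
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)          # the "private" key, fully recovered

print(p, q, d)  # 47 71 1019
```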
The algorithm is based on three theorems: Fermat’s little theorem, its extension, and the Chinese remainder theorem. These form the basis of RSA and are used to show that decryption recovers the original message.
Fermat’s Little Theorem
This theorem states that if p is a prime number and a is an integer with gcd(a, p) = 1,
then a^(p−1) ≡ 1 (mod p).
Fermat’s Extension Theorem
By this extension, if gcd(a, m) = 1,
then a^φ(m) ≡ 1 (mod m),
where φ(m) counts the positive integers less than m that are co-prime to m.
Chinese remainder theorem
In this theorem, if gcd(p, q) = 1 (p and q need not be prime numbers),
then a ≡ b (mod p)
and a ≡ b (mod q)
together imply a ≡ b (mod pq).
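Each of the three theorems can be spot-checked numerically with small values (chosen here purely for illustration):

```python
from math import gcd

# Fermat's little theorem: a^(p-1) ≡ 1 (mod p) for prime p, gcd(a, p) = 1.
assert pow(12, 71 - 1, 71) == 1

# The extension (Euler): a^phi(m) ≡ 1 (mod m) whenever gcd(a, m) = 1.
def phi(m: int) -> int:
    # Count the integers in [1, m) that are co-prime to m.
    return sum(1 for k in range(1, m) if gcd(k, m) == 1)

assert phi(3337) == 3220            # 3337 = 47 * 71, so (47-1)*(71-1)
assert pow(2, phi(3337), 3337) == 1

# Chinese remainder theorem: with gcd(p, q) = 1, congruence mod p and
# mod q together imply congruence mod pq.
p, q, b = 47, 71, 688
a = b + 5 * p * q                   # a ≡ b modulo both p and q by construction
assert a % p == b % p and a % q == b % q and a % (p * q) == b % (p * q)

print("all three theorems verified on small values")
```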
Conclusion
In summary, the following processes are followed in the RSA algorithm. Key generation involves selecting random primes p and q, confirming that p ≠ q; the modulus n is then computed as n = pq, and φ(n) = (p − 1)(q − 1). The public exponent e satisfies 1 < e < φ(n) and gcd(e, φ(n)) = 1.
These are used to compute d = e^(-1) mod φ(n), giving the public key as (n, e) and the private key as d. Encryption is therefore c = m^e mod n, while decryption is m = c^d mod n. For digital signatures, s = H(m)^d mod n, and verification computes m’ = s^e mod n, which is correct only when m’ = H(m), where H is a hash function (Agrawal, et al., n.d.).
Reference List
Agrawal, M., et al. (n.d.). PRIMES is in P. Web.
Coppersmith, D. (1997). Small Solutions to Polynomial Equations, and Low Exponent RSA Vulnerabilities. Journal of Cryptology, v. 10, n. 4.
Digital technology has revolutionized the way human beings and corporations pursue their economic goals. Since the 1940s, the world has relied on emerging systems to produce information that could be coded and decoded depending on the goals of the user. As the level of data application and access increased, the demand for encryption grew in an attempt to make information on different platforms and databases more secure. This trend led to the creation of the Data Encryption Standard (DES) in the 1970s, founded on the original cipher that Horst Feistel had created five years earlier (Sivakumar et al., 2017). IBM developed the more advanced encryption for the National Bureau of Standards, and the resulting DES was hardened against differential cryptanalysis (Sivakumar et al., 2017). The result is a symmetric-key algorithm that supports the encryption and protection of digital information (Daimary & Saikia, 2015), built on a short, 56-bit key.
Before the development of DES, the hardware security module (HSM) was the common method for protecting devices both offline and online, relying on the power of a personal identification number (PIN) to grant access. Since this technology was controlled by a single inventor, Mohamed Atalla, different institutions and stakeholders came to see the need for a standardized model, which led to the development of the DES standard (Daimary & Saikia, 2015). This encryption method relies on the same key to encrypt and decrypt a message, meaning that both the sender and the receiver need to be in possession of the private key (Sivakumar et al., 2017). DES fit in with the prior technology, adding features that maximized security through a better and more advanced encryption strategy.
Strengths
The DES model presents several strengths that make it an effective encryption technology. First, it is founded on a 56-bit key, which made it a relatively secure model for its time (Daimary & Saikia, 2015): it would take attackers a long time to guess the right key and access the safeguarded data (Sivakumar et al., 2017). Second, using the same function for encryption and decryption (merely reversed) makes it convenient to implement in both hardware and software (Sivakumar et al., 2017); computer technologists would therefore find it easy to apply in a wide range of scenarios or settings. Third, Triple-DES improves on the original algorithm by applying it three times, thereby increasing the security level.
Weaknesses
Although DES has been in use for many years, it presents specific challenges and weaknesses that make it inappropriate in the modern-day technological world. For instance, it uses short 56-bit keys, while more advanced ciphers use longer keys and blocks with stronger security attributes (Princy, 2015). A 56-bit keyspace is small enough to search exhaustively (Princy, 2015), so the security risk of DES has continued to increase significantly. Although this technology lacks design or functional flaws, it remains inadequate against hacking, replication, or unauthorized access because it relies on short keys (Princy, 2015). The division of the plaintext into small blocks is another aspect that makes the DES system insecure.
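The scale of the 56-bit weakness is easy to quantify. Assuming an illustrative rate of 10^9 keys tried per second (a stand-in figure, not a measurement; dedicated cracking hardware is far faster), exhausting the keyspace takes only a couple of years:

```python
# Rough brute-force estimate for DES's 56-bit keyspace. The rate of
# 10**9 keys per second is an illustrative assumption.
keyspace = 2 ** 56
rate = 10 ** 9
seconds = keyspace / rate
years = seconds / (60 * 60 * 24 * 365)

print(f"{keyspace} keys; about {years:.1f} years at 1e9 keys/s")
```

At that assumed rate the search finishes in roughly two years, and specialized hardware shortens this dramatically, which is exactly the brute-force exposure described above.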
In terms of performance, the encryption model is fast but incapable of maximizing protection. Users have focused on this gap to introduce superior encryption systems capable of delivering positive results (Hameed et al., 2018). In terms of total cost of ownership (TCO), it is quite clear that the buyer or user of DES will be unable to get value for money after installing it for data privacy, because the chances of unauthorized access increase significantly (Princy, 2015). The cost of deployment (COD) guides companies in understanding the connection between a specific system and its ability to provide long-term financial gains (Princy, 2015). The DES encryption technology might have a low COD in comparison with modern systems, such as the Advanced Encryption Standard (AES). Finally, those who implement the DES model in their systems will be disadvantaged because more companies and institutions are pursuing superior methods to support data decryption and encryption.
Opportunities
The nature of DES technology means that it presents various opportunities to those who rely on it for security purposes. The first is that workers would require only modest additional training to utilize it successfully and pursue their organizational goals. The second is that companies and institutions can capitalize on it to safeguard data without incurring unnecessary expenses (Patil et al., 2016). The third is that agencies or firms relying on this technology will have established the right infrastructure, preparing them for additional changes or replacements in the future; such organizations will not spend more financial resources acquiring new systems to support a superior data encryption model. The fourth is that DES would still be relevant for emerging businesses and small firms in the near future (Hameed et al., 2018), given its ease of implementation, reduced cost, and ability to provide the intended security measures. Additionally, business organizations planning to sell their systems would encounter minimal challenges, since DES remains applicable and recognizable in a wide range of industries and sectors across the globe.
Threats
Modern technologies and innovations have been changing very fast due to the power of research and development (R&D). DES has continued to suffer a similar fate as more companies and users continue to consider emerging systems that can provide the best security. The weaknesses associated with it have led to the innovation and implementation of the AES. The use of 56 bits keys makes it vulnerable and ineffective for confidential data. Hackers, phishers, and programmers can break or access it much faster. The leading security issues include: easy to compromise, weak block, and brute-force attacks (Patil et al., 2016). The major legal concern is that firms that implement DES stand a chance to be sued if hackers access private data and use it for malicious purposes.
Additionally, the established data privacy standards in many regions no longer recognize DES as an effective system (Alemami et al., 2019). Such a gap explains why many countries have gone further to consider the effectiveness of the AES system (Sivakumar et al., 2017). This trend is possible since AES has better features and is far harder to breach (see Figure 1). DES also encounters numerous deployment concerns, since a shrinking number of businesses and organizations rely on it (Patil et al., 2016), making it difficult for users to link their systems and achieve their goals. Due to the nature of this threat, many individuals and leaders have been keen to identify and implement advanced encryption systems that can be deployed at the international level.
Summary: SWOT Analysis
STRENGTHS
The 56 bits key remains relatively secure
DES has no functional problems or challenges
Hackers would take a long time to gain entry into a given system or database
It can be upgraded or improved to Triple-DES
It remains affordable and easy to implement
WEAKNESSES
A short key of 56 bits makes it insecure and prone to attacks
DES is a common target for hackers due to the nature of its plaintext block
It fails to provide the intended aims or protection goals
DES does not provide the required or expected value for money
Its COD is extremely unsustainable for emerging companies
The global society is changing or focusing on superior encryption technologies and systems
OPPORTUNITIES
Minimum adoption costs and requirements make it popular
Emerging technologies to improve DES
Emerging businesses that need DES
Existing infrastructure is an opportunity for technological or systems update
Opportunity-Strength (OS) Strategies
Technologists need to rely on the original DES model to develop a superior encryption system that meets the needs of more users (S12, O12).
The new system can be marketed to different certification and standardization agencies to become an acceptable model for the whole world (S45, T345, O34)
Opportunity-Weakness (OW) Strategies
Engineers need to use emerging technologies to create a new encryption system with longer bits (O12, W1)
Engineers need to engage in additional research study to improve the integrity of the new encryption system or technology (O2, W36)
The new system can be made available to more firms and users to improve acceptance (O24, W25)
THREATS
Changing technologies
Emergence of AES
Legal challenges from lawsuits
DES vulnerability forces users to consider other solutions
Emerging privacy/encryption requirements or standards
Deployment concerns and gaps
Threat-Strength (TS) Strategies
Technologists need to identify new ways of making the introduced version acceptable by delivering the required security solutions (T34, S12)
A superior marketing strategy and demonstration will make this new version of DES reliable (T6, S3)
Threat-Weakness (TW) Strategies
Stakeholders need to ensure that the launched DES version is capable of providing the desired security levels while remaining affordable (T245, W34)
Conclusion
The above discussion has identified DES as a powerful encryption technology that has been in use for decades. While it delivers the intended security measures, it still presents several weaknesses and threats that have led to the introduction and implementation of the AES as the acceptable global standard. The outlined summary can guide different professionals and engineers to focus on the strengths of DES and capitalize on the existing opportunities to develop a superior encryption system that can compete successfully in the market with AES. Such a move will encourage more people to implement the advanced version and eventually achieve the anticipated business goals.
References
Alemami, Y., Mohamed, M. A., & Atiewi, S. (2019). Research on various cryptography techniques. International Journal of Recent Technology and Engineering, 8(2S3), 395-405. Web.