Brennan, Vanguard's CEO and chairman, initiated a massive integration project: a single web portal meant to replace siloed systems and databases. The project was motivated by the need to give customers excellent, seamless service.
After Vanguard's website launched in 1998, customers could use the web for a number of activities, including opening new accounts and purchasing and redeeming shares. These services led most customers to shift to the web to manage their portfolios. Today, investors who use the Vanguard website invest more than customers who are not online, and the cost of serving online customers is lower (Anon, n.d.).
Some time later, the managing director noted a problem with the employees' web interface and proposed that employees use the same interface as customers. Sharing one interface between customers and employees produced good results in seamless customer service and channel parity, and this was then enhanced with automation that required little human intervention. Using the same interfaces both internally and externally put clients and crew members on the same footing in their operations, which gave employees more time to concentrate on investment issues.
Because the corporate portal's web interface was intuitive, it removed the need to train customers on the system itself, giving staff more time to advise clients on investing. The system also helped Vanguard handle a high volume of customer calls and improve its services.
The employees' interface was simplified so that employees worked from a summary of a customer's transactions, with tools that fed them the information and services that customer needed. This made employees' work simpler and easier. The Vanguard.com interface was then expanded by buying and integrating a third-party CRM system, which turned the confirmation page into a power-user page and simplified internal system maintenance (Anon, n.d.).
The most important benefit of the web interface is the straight-through processing built on Vanguard.com. To enhance this benefit, the Buckeys team, in line with expanding the Vanguard.com site, developed tools that applied standard rules at the point of data entry. Data entered by employees could then go straight through instead of triggering a manual process, reducing the cost of labor and making this a cheap way to conduct business.
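Straight-through processing of this kind can be sketched in a few lines: rules run at the point of data entry, and a record either flows straight through or is routed to manual review. This is a hypothetical illustration; the field names and rules are invented, not Vanguard's actual logic.

```python
# Illustrative rules applied at the point of data entry. A record that passes
# every rule goes "straight through"; any failure triggers manual review.
RULES = [
    ("amount", lambda v: isinstance(v, (int, float)) and v > 0),
    ("fund_code", lambda v: isinstance(v, str) and len(v) == 4),
    ("account_id", lambda v: isinstance(v, str) and v.isdigit()),
]

def route_transaction(record):
    """Return 'straight-through' if every rule passes, else 'manual-review'."""
    for field, check in RULES:
        if field not in record or not check(record[field]):
            return "manual-review"
    return "straight-through"
```

The cost saving comes from the second branch being the exception: only records that fail validation consume employee time.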
The Vanguard team introduced transaction capability on Vanguard.com, which marked the beginning of the company's enterprise database. Customer data that had been held at 10 different points could be brought together, enabling the IT team to create a comprehensive customer database (Anon, n.d.).
Statement of the problem
The problem facing the Vanguard team is gauging the financial viability of the current investment project meant to improve customer service; the project is said to be costly and may affect the company's financial viability.
Solution
The company needs to undertake a cost-benefit analysis before investing heavily in the project; this will establish the project's viability for the organization.
This solution will work because it will enable the manager to weigh the various options available for the project and to list the impediments the project is likely to face during implementation (Riley, 2006).
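A cost-benefit analysis of this kind usually reduces to comparing discounted benefits against costs. The sketch below shows a net-present-value check; the cash flows and the 10% discount rate are invented for illustration, not figures from the case.

```python
def npv(rate, cashflows):
    """Net present value of a cash-flow series, where cashflows[0] occurs today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical project: year 0 upfront investment, years 1-4 net service-cost savings.
project = [-500_000, 150_000, 175_000, 200_000, 200_000]
result = npv(0.10, project)
# A positive NPV suggests the project is financially viable at this discount rate.
```

The same function lets the manager compare several options simply by running their respective cash-flow series through it.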
Reference List
Anon. (n.d.). All for One View. M626 case study, week 3.
The era of rapid technological innovation demands new approaches to the security of technology implementations. Versatile as they are, current standards still rest on a logical business and legal rationale. Asset management should account for all expenditures on the principal equipment and supplies needed.
The organization under analysis provides medical support services in two cities situated close to each other. It is almost ten years old, and its staff is divided into several departments, including private clinics in the suburbs between the two cities. The staff totals 172 medical workers and 34 technical employees. A special role belongs to the IT department, whose 8 members handle the modernization and support of current software to promote the best quality of medical services through communications and security systems. Living in a post-industrial society means providing people with better connectivity; medical service should be as immediate as possible, so modern technological solutions are core elements of a successfully functioning healthcare system.
Organizational risk is an aggregate factor and must be determined collectively by all of the information owners within and throughout the organization (Freeman, p. 234). The company's policy therefore governs the use of information technologies and the assets they encompass. First of all, the management team should identify probable risks and assign personal responsibility for the necessary and appropriate implementation of technology. Current practice shows strict but fair competition among Medicare providers, who strive for better services as measured by feedback from current and potential customers. In this respect, it is useful to understand the scope of the policy. Alan Calder, in his book, proposes the following steps before working out a security policy for information assets:
An information security policy answers the four key questions: who, where, what and why? Who is responsible for information security in the organization? To which parts of the organization does the policy apply? What are we required to do? And why are we required to do it? (Calder, p. 56).
Consideration of needs
The medical workers' offices should be supported with software that does not lag behind current technology. The assets involved in equipping the company include the following points:
filing cabinets and stores containing paper records
computer databases
data files and folders
software licenses
physical assets (computer equipment and accessories, PDAs, cell phones)
key services
key people
intangible assets such as reputation and brand (Asset Management, Ch. 12)
Concrete demands
Every employee is to be instructed, before starting official work, in the relevant basic law. Every piece of information is attached to a definite employee so as to protect the organization from information leaks; all representatives should therefore maintain a responsible attitude toward the documents and other information sources they create. The more senior a staff member, the more responsibility the information ownership policy assigns. The level of protection for different information assets should follow a classification schedule that designates whether information is personal or confidential in content (Asset Management, Ch. 12). Thus, accountability and responsibility are the factors that instill in personnel a conscientious attitude before, during, and after official employment. The IT department also monitors employees' use of websites and Internet services beyond those required for their direct professional duties. All entertainment sites are prohibited and blocked inside the company. Email is required to run on a local network that comprises all subsidiaries of the company in different locations, including the call center and technical service branches.
Ethical, moral, and legal implications
Looking at how an employee personally follows the prescriptions of the standardized, approved asset management policy for information systems and their technological implications, one should weigh the ethical, moral, and legal aspects of the company's promotion of security protection. From the ethical point of view, employees should communicate appropriately with customers and among the personnel as well. Above all stands the employee's self-regard and reliable attitude toward the company's policy. This standpoint prevents, at the entry level, any intentional or unintentional attempt by careless employees to violate the rules under which the company acts and provides its services. This also concerns, in part, the moral side of the issue. For example, the many-faceted nature of Internet resources can tempt medical workers into sharing information forbidden by morality and current law.
If the above-mentioned attitudinal factors do not lead employees to follow the company's policy straightforwardly, the power of law dots all the i's. The Electronic Communications Privacy Act (ECPA), which was reformulated by Congress in 1986, imposes liability on any individual who intentionally intercepts, endeavors to intercept, or procures any person to intercept or endeavor to intercept, any wire, oral, or electronic communication (Brennan, p. 84). Moreover, the traditional right to privacy also provides a basis for protecting the information technologies and data used within the company. The concept of privacy adopted and widely realized in the United States acts as a deterrent, given the criminal responsibility of those who violate it. Companies working in different spheres of activity are also covered by the business extension exception (Brennan, p. 84). Under it, network providers may handle electronic communications in the ordinary course of appropriate use when:
the intercepting device is part of the communications network;
the device is used in the ordinary course of business. (Brennan, p. 84).
Known attempts to violate security policy in one company or another are recorded in the legal record, which informs the consideration of each such attempt by an employee. Overall attitudinal harmony still contributes to the perpetual well-being of the company. To protect themselves from legal liabilities, health care organizations need to show due diligence in attempting to implement best practices in this regard (Freeman, p. 234).
Conclusion
Thus, in accordance with the IT security policy, a company dealing in Medicare services aims to keep a strict eye on employees' interactions through information technologies and devices. The ethical, moral, and, finally, legal bases for protecting the policy are well developed in order to secure the company's successful functioning.
Reference
Assets and Information Systems Strategic Plan 2007-2011. Healthcare Practitioner Registration Board, Queensland Government.
Brennan, Linda L., Johnson, Victoria Elizabeth. (2003) Social, ethical and policy implications of information technology. Idea Group Inc (IGI)
Calder, Alan. (2005) A business guide to information security: how to protect your company's IT assets, reduce risks and understand the law. Kogan Page Publishers
Freeman, Lee, Peace, Graham. (2005) Information ethics: privacy and intellectual property. Idea Group Inc (IGI)
First of all, it is necessary to mention that network security is regarded as a rather complicated issue, one that can properly be managed and controlled only by experienced IT specialists. Still, with the increase in Internet mobility and accessibility, and with the substantial growth in the number of wired and wireless communication users, people are obliged to know at least the basics of network security. Despite the fact that the main actions aimed at improving the security level will be performed by the network's system administrator, all users are obliged to do everything possible to prevent virus or hacking attacks, as well as information and data leakage.
Security Improvement
To begin with, it should be stated that the concepts of network security and information security are similar and often used interchangeably. Still, there is a difference in approach: network security is defense against outside attacks (e.g., black hat hackers, script kiddies, etc.), while information security presupposes inward defense (user negligence, data loss, user mistakes, etc.). Dean (2005) states the following in her guide: One response to this insider threat in network security is to compartmentalize large networks so that an employee would have to cross an internal boundary and be authenticated when they try to access privileged information. Information security is explicitly concerned with all aspects of protecting information resources, including network security and DLP.
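The compartmentalization Dean describes can be sketched with the standard-library `ipaddress` module: before serving privileged information, check whether the requester already sits inside the protected internal segment, and force everyone else to authenticate at the boundary. The subnet value is an invented assumption for illustration.

```python
import ipaddress

# Hypothetical privileged internal segment; real deployments would load this
# from network configuration.
PRIVILEGED_SEGMENT = ipaddress.ip_network("10.20.0.0/16")

def requires_authentication(client_ip: str) -> bool:
    """True if the client is outside the privileged segment and must
    authenticate before accessing privileged information."""
    return ipaddress.ip_address(client_ip) not in PRIVILEGED_SEGMENT
```

The point of the design is that the boundary check is cheap and happens before any privileged data is touched.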
Taking into account that a security-system upgrade is a project that should be properly managed, the actions to be performed to upgrade the security level of the IT network are the following:
A strong firewall and proxy should be set up to keep unwanted people out.
A strong antivirus package and an Internet security software package are the most important parts of the security mechanism.
All users (employees) should use strong passwords and change them regularly.
If a wireless connection is used, it must be protected with a complex password.
Physical security measures should be undertaken to restrict access to the hardware of the IT network.
A network analyzer or network monitor should be prepared and used when needed.
Physical security management, such as closed-circuit television, should cover entry areas and restricted zones.
Security fencing should mark the company's perimeter.
Fire extinguishers should be placed in fire-sensitive areas such as server rooms and security rooms.
Security guards can help to maximize security (Flynn & Kahn, 2003).
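One checklist item above, strong passwords, can be enforced in code rather than by exhortation. The sketch below is an illustrative policy check, not a complete password audit; the length and character-class thresholds are assumptions a real policy would tune.

```python
import string

def is_strong_password(pw: str) -> bool:
    """Illustrative policy: length >= 12 and at least three of the four
    character classes (lowercase, uppercase, digits, punctuation)."""
    classes = [
        any(c.islower() for c in pw),
        any(c.isupper() for c in pw),
        any(c.isdigit() for c in pw),
        any(c in string.punctuation for c in pw),
    ]
    return len(pw) >= 12 and sum(classes) >= 3
```

A check like this would typically run at password-change time, alongside the regular-rotation rule from the checklist.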
From this point of view, project management techniques would be rather helpful in carrying out this project. The listed actions and recommendations will be helpful only if they are properly planned and the whole personnel of the company takes them seriously, putting aside any form of negligence.
Conclusion
Finally, it is necessary to mention that project management of security improvement techniques should be divided into two parts: network security and information security. Actions directed toward granting security should disturb the personnel's working process as little as possible; still, where a conflict arises, convenience may have to be set aside, as confidential information is far more valuable.
References
Dean, T. (2005). Network+ Guide to Networks. Course Technology.
Flynn, N., & Kahn, R. (2003). E-Mail Rules: A Business Guide to Managing Policies, Security, and Legal Issues for E-Mail and Digital Communications. New York: AMACOM
Researching smartphone subscription packages is like hunting the proverbial moving target, because there are fundamental service factors, core product features, and add-on capabilities meant to delight a variety of consumer wants.
| | NOKIA | PALM | BlackBerry | LG | Samsung | HTC | Apple |
|---|---|---|---|---|---|---|---|
| Model | E72 | Pixi + | Curve 8900 | Expo | n/a | Nexus One | iPhone |
| Carrier service* | Sprint Nextel | Verizon | T-Mobile | T-Mobile | Verizon | Sprint Nextel | AT&T |
| Dead zones | Y/N | Y/N | Y/N | Y/N | Y/N | Y/N | Yes |
| Dropped calls | Y/N | Y/N | Y/N | Y/N | Y/N | Y/N | Yes |
| Late or unsent messages/files | Y/N | Y/N | Y/N | Y/N | Y/N | Y/N | Y/N |
| Subscription fee* | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| Unit locked to provider | Yes | Yes | Yes | Yes | Yes | No | Yes |
| Basic Criteria | | | | | | | |
| Unit price* | Less than $50 | $600 or more | $99 | Less than $50 | $600 or more | Less than $50 | $599 |
| Communication band | CDMA to GSM 1900 | CDMA to GSM 1900 | CDMA to GSM 1900 | CDMA to GSM 1900 | CDMA to GSM 1900 | CDMA to GSM 1900 | CDMA to GSM 1900 |
| Value-added Criteria | | | | | | | |
| Keypad type | QWERTY | QWERTY | QWERTY | QWERTY | QWERTY | QWERTY | QWERTY |
| RAM size | 8 to >288 MB | 8 to >288 MB | 8 to >288 MB | 8 to >288 MB | 8 to >288 MB | 8 to >288 MB | 8 to >288 MB |
| Weight | n/a | n/a | n/a | n/a | n/a | n/a | n/a |
| 3rd generation | Yes | Y/N | No | Yes | Yes | Yes | Yes |
| Performance speed | Medium to high | Medium to high | Medium to high | Medium to high | Medium to high | Medium to high | Medium to high |
| GPS | Y/N | Y/N | Y/N | Y/N | Y/N | Y/N | Yes |
| Navigation system | Y/N | Y/N | Y/N | Y/N | Y/N | Y/N | Y/N |
| Games | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Downloadable apps | Y/N | Y/N | Y/N | Y/N | Y/N | Y/N | Thousands |
| Operating system | Symbian | Microsoft | Microsoft | Microsoft | Microsoft | Microsoft | Microsoft |

* List entries merely show a range of possibilities, since there are too many plans to break out in this preliminary investigation grid. Cells marked n/a had no entry in the grid.
Where the mobile phone telcos are concerned, the fundamentals have to do with cell coverage, radiated signal power, and connection stability. Even the best mobile unit in the universe will steadily lose customers if the carrier plagues subscribers with dead spots outdoors (a function of the number of cell sites covering a metropolitan area completely), weak signals that cannot penetrate through building walls and down to basements, and such poor capacity that users experience dropped calls or are unreachable.
To an extent, high-gain antennas can offset uneven network coverage. These and other advanced features (large memory capacity, replaceable memory cards, the ability to process and display multimedia files, a built-in still/video camera with flash, high-density photoreceptors for improved picture quality, QWERTY keypads, large screens, organizer suites with reminders, alarms, and appointment notes, Wi-Fi Internet access, Bluetooth and cable connectivity for transferring files and pictures, value-added apps such as those created by third parties for the iPhone/iTouch line, games, etc.) are both boon and bane to buyers.
Such features are a boon because mobile phone users feel highly satisfied at being better equipped for anything that comes along and being able to show off to friends. They are also a bane because development and market-introduction cycles for handsets are measured in months, certainly far shorter than the typical subscription lock-in period.
By comparison, price matters less than value; the latter can include subscription terms, which the mobile service carriers leverage in creative ways. A handy size has plunged to the bottom of subscribers' criteria lists owing to the trade-off with larger touch-responsive screens and QWERTY keypads. And core functionality (being able to make and accept voice calls and short text messages) is not even a consideration anymore, because the product category has developed overwhelmingly in favor of consumer delight.
The term spam refers to unwanted email messages passed to the receiver over the Internet. Such messages may be distasteful, deceitful, or sent in error. Spam is not targeted only at specific individuals: in some instances email addresses are guessed, or obtained from whatever source, without the receiver's knowledge. Any person can be a target of spam (Anon 1).
Summary of the case
Several companies encounter this problem of spam. For instance, employees of the Pier 1 chain were spending a lot of time each day clearing spam from their mailboxes. The whole email system became a major problem at a time when spam formed about eighty percent of all email (Case Study, para. 1).
Likewise, Charter Communications, the fourth-largest television and Internet cable company, suffered from spammers. The company handles more than 150 million email messages each day, and spam comprised more than fifty percent of inbound email, creating a nuisance for customers.
Another example comes from First Banking Services. This company's employees use desktops to provide core data processing services in the southeastern United States. The company is linked with several partners, and most of its email messages have large attachments. Here, too, spam turned out to be a major problem.
Statement of the problem
The manager of any organization where spam is a major issue faces several problems. For instance, as spam increased, employee productivity at Pier 1 suffered greatly. In an effort to overcome the problem, the company used a keyword filter, but this system did not succeed because it blocked legitimate messages containing words with double meanings. At Charter Communications, spam created a nuisance by reaching customers' inboxes along with viruses.
Following this, it implies that the managers of these companies as well as other companies that are encountering the problem of spam are faced with the challenge of dealing with this problem in a most effective way.
Conclusion
There is a great need for coming up with a solution or even solutions to deal with the problem of spam. Some of the ways that might be effective, which the managers of the organizations may employ, can be borrowed from those used by the three companies considered in this case.
For instance, Pier 1 dealt with the problem of spam by using the MailFrontier Enterprise Gateway, which works most effectively with Microsoft's email software. MailFrontier is placed in front of Microsoft Exchange to inspect incoming email; it accepts legitimate messages and rejects spam. This software has stopped up to 98 percent of all spam.
Charter Communications, in turn, now uses two unique tools from IronPort to deal with spam. The first, the IronPort C60 email security appliance, allows the company to divide email senders into distinct categories and, for every sender, sets specific thresholds for accepting or rejecting messages. The second, Reputation Filtering, complements the first by allowing administrators to sort email senders based on the reputation of their mail; messages of poor quality are rejected.
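The per-sender thresholds described for the IronPort appliance can be sketched as a small lookup-and-compare step. This is a hypothetical illustration of the idea only; the categories, scores, and cutoffs are invented, not IronPort's actual values.

```python
# Illustrative per-category reputation cutoffs: trusted partners clear a low
# bar, unknown senders must score much higher to be accepted.
THRESHOLDS = {"partner": 20, "known": 50, "unknown": 80}

def accept_message(sender_category: str, reputation_score: int) -> bool:
    """Accept mail whose reputation score meets its category's threshold;
    unrecognized categories fall back to the strictest ('unknown') cutoff."""
    cutoff = THRESHOLDS.get(sender_category, THRESHOLDS["unknown"])
    return reputation_score >= cutoff
```

The design point is that rejection happens per sender category rather than per keyword, which avoids the double-meaning problem that defeated Pier 1's keyword filter.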
First Banking Services has come up with its own way of dealing with spam: Mail Warden Pro. This spam-fighting solution allows users to create rules that differentiate between spam and non-spam in a quite flexible way. By employing it, spam has been reduced from as high as sixty-five percent to as low as five percent of total mail (Case Study, para. 5).
All these solutions are seen to be effective and the managers of other organizations that are being faced with the problem of spamming can choose any of the above techniques to deal with spam.
Works Cited
Anon. Dealing with spam. Queens University Belfast. 2010. Web.
Eastern Flight 401 took off from John F. Kennedy airport in New York on 29 December 1972, headed to Miami, at 2120 hours. The trip was routine until 2332 hours, when the plane began its approach to Miami International Airport. First Officer Stockstill observed that a green light, which indicates that the nose gear is correctly locked in place, failed to illuminate; a burned-out bulb had caused this. While the crew worked on the light assembly, the plane began a gradual descent and soon lost half of the necessary altitude. This was noticed too late; the plane flew into the ground, taking the lives of 101 people (Krause, 2003). Apart from technical faults, poor communication between the crew members, the captain's failure to delegate his authority, and lack of crew training are to blame for the plane crash.
Concentration on the Minor Warning Indication
First of all, the crew's greatest mistake was concentrating on a minor warning indication. When the pilot noticed that the landing gear indicator did not illuminate, every single member of the crew got involved with the problem. Thus, fixation on the fault in the nose landing gear position indicating system diverted the crew's concentration to the gear and permitted the descent to go unnoticed (American Scientific Affiliation, 2007). This points to the improper work of the crew.
Communication in the Cockpit
Moreover, much testifies to the fact that the crewmembers failed to communicate adequately in the cockpit. For instance, when the C-chord alarm sounded in the cockpit, none of the crew members commented on it, and nothing was done to counter the loss of altitude (Smith, 2001). This shows that communication in the cockpit was practically absent, that crew resource management was not successful at all, and that the members of the crew were not sufficiently trained before the flight.
Captains Delegation of his Authority
Besides, the captain of the aircraft did not manage to delegate authority effectively. The problem lay in Captain Robert Loft's loyalty to the times when a captain's orders had to be obeyed implicitly. When Loft faced a problem, he did not ask his co-pilot to take control of the aircraft; he tried to control the plane and address the technical problem simultaneously, and failed at both. According to modern crew resource management training, crew members should cooperate and constantly interact (Lesage, Dyar, & Evans, 2009). This was another great mistake that contributed to the accident.
The Fault of Miami ATC
Nevertheless, the captain and the crew are not solely responsible for the accident; mistakes were made by Miami ATC as well. When the flight crew reported the unsafe gear indication, the air traffic controllers had to check the landing gear; they reportedly failed to do this properly because of poor lighting (the sun was already below the horizon). As a result, no alert was given to the plane when it started losing altitude. Miami ATC should have given a sterner warning to the aircraft; perhaps this could have helped to save more lives.
Conclusion
In sum, it was the crew's poor communication, lack of training, and the captain's ineffective delegation of authority that, together with technical problems, led to the plane crash. The Eastern Flight 401 accident, however, had long-term effects on training systems and methods. It has led to more effort being directed at proper crew training, aimed at making communication between crew members more effective and their response to emergencies more appropriate. All this now makes for better crew resource management and safer flights.
References
American Scientific Affiliation. (2007). Perspectives on science and Christian faith: journal of the American Scientific Affiliation, Volumes 59-60. Blaine, Washington: The Affiliation.
Krause, S. S. (2003). Aircraft safety: Accident investigations, analyses, and applications. New York: McGraw-Hill Professional.
Lesage, P., Dyar, J.T., & Evans, B. (2009). Crew resource management: principles and practice. London: Jones & Bartlett Publishers.
Smith, D. R. (2001). Controlling pilot error: Controlled flight into terrain (CFIT). New York: McGraw-Hill Professional.
The use of email in communication is widespread throughout the world, in both business and personal communication, owing to conveniences such as high speed. It nevertheless carries risks: messages deleted from an individual's computer may still be stored on some server; messages can be read and modified in transit before they reach their destination; and login usernames and passwords can be stolen and used by hackers. These risks lead to eavesdropping, identity theft, false messages, and repudiation, among others. Several methods of enhancing email security, mentioned below, have been developed through research, but one key question remains: which is most appropriate for protecting sensitive data? This paper is based entirely on the article S/MIME V3 White Paper by eB2Bcom and analyzes several facts stated in that article.
Privacy Enhanced Mail (PEM)
Privacy Enhanced Mail (PEM) consists of extensions to existing message processing software plus a key management infrastructure. It is compatible with RFC 822 message processing conventions and transparent to Simple Mail Transfer Protocol relays (S/MIME V3 White Paper, 2010, p. 2). PEM uses symmetric cryptography, and public key management is based on the use of certificates as defined by the International Telecommunications Union Telecommunications Standardization Sector (ITU-T) Directory Authentication Framework.
Pretty Good Privacy (PGP)
PGP also uses encryption and decryption of emails to increase the security of email communications. Created by Philip Zimmermann, it follows the OpenPGP standard (RFC 4880) for encrypting and decrypting data.
Secure/Multipurpose Internet Mail Extensions (S/MIME)
S/MIME, like the protocols above, aims at protecting email data from unintended recipients. The standard specifies the application/pkcs7-mime type for data encryption: the whole MIME entity to be enveloped is encrypted and packed into an object, which is subsequently inserted into an application/pkcs7-mime MIME entity.
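The wrapping step just described can be sketched with Python's standard `email` package: the already-encrypted CMS blob is carried as an application/pkcs7-mime entity. The payload bytes below are a placeholder standing in for real cryptographic output, not actual CMS data.

```python
from email.mime.application import MIMEApplication

# Placeholder for the encrypted CMS EnvelopedData object that S/MIME would
# produce from the original MIME entity.
encrypted_blob = b"...CMS EnvelopedData bytes would go here..."

# Wrap the blob in an application/pkcs7-mime MIME entity, as the standard
# specifies for enveloped (encrypted) data.
entity = MIMEApplication(
    encrypted_blob,
    _subtype="pkcs7-mime",
    name="smime.p7m",
)
entity.add_header("Content-Disposition", "attachment", filename="smime.p7m")
# entity.as_string() now yields a transferable application/pkcs7-mime message.
```

The receiving client reverses the process: it extracts the blob from the application/pkcs7-mime entity, decrypts it, and recovers the inner MIME entity.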
S/MIME v3 ESS
As stated in the paper under analysis, this is the latest version of S/MIME, with a number of Enhanced Security Services (ESS): secure mailing lists that allow just one digital certificate to be used when sending a secure message to all members of a mailing list; signing certificates that bind the signer's certificate to the signature itself; signed receipts that provide proof of delivery of the message and of successful verification; and security labels.
TrustedMIME
TrustedMIME is developed by SSE according to the industry-standard S/MIME protocol. It plugs into the client's email, providing the user with 128-bit encryption and up to 2048-bit digital signatures. It supports both Microsoft (Outlook, Exchange, and Messaging) and Lotus Notes platforms. TrustedMIME is based on a chosen Public Key Infrastructure (PKI), but in its absence users can generate their own self-signed public key certificates.
Analysis
Cryptography can generally be divided into public and private key cryptography. On the one hand, private key cryptography involves users sharing a single private key for both encryption and decryption; its major hindrance is distributing that private key in large networks. The principle behind public key cryptography, on the other hand, is that of a one-way function f where, given x, f(x) can easily be computed, but the reverse is computationally impractical. The advantage of a public key system is that no key distribution is needed, and hence it is flexible: as hardware improves, larger keys are simply used, unlike private keys, which must be regenerated and redisseminated. Public key cryptography, though, is generally slower.
The major limitation of PEM is its incompatibility with MIME, the standard Internet mail format. PEM relies on a public key directory; since there was no centralized online public key directory in 1989, PEM was designed to operate without one, and each signed message includes all of the certificates in the chain needed to verify the message signature. Even so, two users cannot securely interchange messages immediately after downloading PEM software: they first need to have their public keys certified by their local CAs, their CAs need to be certified by a Policy CA, and the Policy CA itself needs to be registered with the Internet Policy Registration Authority (IPRA). The system was simply designed to work in that period, and it solved the problem of its time.
PGP usage has spread because the software was freely available to academics and researchers in the US from its inception, and a non-copyrighted version was made available to the rest of the world. Its advantage is that no certification infrastructure is required for secure use. However, its method of key distribution, and the associated web of trust that users build for themselves, is difficult to sustain when numerous users are involved, as noted above under private-key cryptography.
The parts of the original S/MIME protocol were spread across different informational RFCs and required the use of weak cryptography (40-bit keys). The S/MIME v3 standard consists of five parts: Cryptographic Message Syntax (RFC 3852), Cryptographic Message Syntax (CMS) Algorithms (RFC 3370), S/MIME Version 3.1 Message Specification (RFC 3851), S/MIME Version 3.1 Certificate Handling (RFC 3850), and the Diffie-Hellman Key Agreement Method (RFC 2631). There is also an additional protocol, Enhanced Security Services for S/MIME (RFC 2634), a set of extensions to S/MIME that allows signed receipts, security labels, and secure mailing lists. The signed receipt and security label extensions can be used with either S/MIME v3 or S/MIME v2, whereas secure mailing lists require S/MIME v3. It is important to note that not all e-mail clients handle all S/MIME signatures: at times an smime.p7s attachment appears on an e-mail, which tends to confuse users. S/MIME, like any other secure webmail signing technique, depends on a browser for code execution in readiness for the generation of a signature.
The kind of cryptography used to secure a communication channel determines the level of protection achieved. While public-key and private-key cryptography each have their own pros and cons, both can be fused in a single security system to exploit the strengths of each. An example of such a process, according to Vocal Technologies (2009, para. 5), is the digital envelope: private-key cryptography is used to encrypt a message m, yielding ciphertext c; the secret key s is then encrypted using public-key cryptography, yielding k. The encrypted message and key pair (c, k) may then be sent securely, since only the recipient can recover s from k. The secret key s may then be used to quickly decode ciphertext c, yielding the original message m.
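The digital envelope described above can be sketched end to end. The sketch below is a hedged toy: a SHA-256-derived XOR keystream stands in for a real symmetric cipher such as AES, and tiny textbook-RSA parameters stand in for a real 2048-bit key pair; the function names are illustrative, not from any particular library.

```python
import hashlib
import random

# Toy digital envelope: a fast symmetric cipher protects the message m,
# and a (toy) public-key step protects the symmetric session key s.
# The XOR keystream and the textbook-RSA parameters are illustrative
# stand-ins for real algorithms (e.g. AES and 2048-bit RSA).

N, E, D = 3233, 17, 2753            # textbook RSA: n = 61*53, e*d = 1 mod phi(n)

def keystream(s, length):
    """Derive a pseudo-random byte stream from session key s."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(f"{s}:{counter}".encode()).digest()
        counter += 1
    return out[:length]

def sym_encrypt(s, m):              # private-key step: cheap XOR cipher
    return bytes(a ^ b for a, b in zip(m, keystream(s, len(m))))

sym_decrypt = sym_encrypt           # XOR is its own inverse

def envelope(m):
    s = random.randrange(2, N)      # fresh session key
    c = sym_encrypt(s, m)           # ciphertext c = symmetric encryption of m
    k = pow(s, E, N)                # k = s sealed with the recipient's public key
    return c, k

def open_envelope(c, k):
    s = pow(k, D, N)                # recipient recovers s with the private key
    return sym_decrypt(s, c)        # then quickly decodes c back to m
```

The expensive public-key operation is applied only to the short session key, while the bulk message goes through the fast symmetric cipher, which is why this hybrid pattern dominates in practice.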
Conclusion
Several factors, such as the sensitivity of the data in transit, the cost of installing and maintaining the security system, the size of the network the system is to cover, and the impact on users, come into play in determining the type of security system adopted by an individual or a firm. Based on the methods of operation of the various standards, S/MIME v3 with ESS is appropriate for large networks, while for smaller networks PGP is suitable.
References
SeB2B.com. (2010). S/MIME V3 White Paper: E-mail security. Web.
This introduction covers different strands of research on the topics addressed in the paper as a whole: systems, databases, programs, and web services. With the introduction of the World Wide Web, a marked shift is being felt away from files as the corporate basis of data. Beyond proprietary designs, many businesses now use the web for their data creation, storage, and distribution requirements. The introduction of XML, with its advanced control over meta-data and document structure, has clearly improved the prospects for document management on the web. For relational databases, SQL (Structured Query Language) is the most widely accepted language [SQL86, SQL89, SQL92], and it now also appears in object-oriented and object-relational (OR) databases (Astrahan & Chamberlin, 1975, p. 1). It serves as a data manipulation language (DML), a data definition language (DDL), and a data query language (DQL). Achieving broad case-based reasoning support for corporate memories will require flexibility in transforming implementations alongside an organization's existing resources and infrastructure (Bancilhon & Delobel, 1992, p. 2). Ontology Web Language for Services (OWL-S) is one of the many efforts to enable ontology development for semantic web services.
When describing web services, one of the notable aspects that needs to be represented is Quality of Service (QoS): the ability of a web service to meet a satisfactory level of service with respect to factors such as suitability, accessibility, performance, accuracy, reliability, compliance, security, and regulation. XML is one of the most widely used data representation and data exchange formats, and the number of XML-related applications, both developed and under development, is significant. Much research on XML has focused on developing efficient mechanisms to store and handle XML data, either as part of a relational database or using native XML stores; hiding secured data, however, remains important. This paper covers five units, namely: moving from SGML databases and bringing in SQL; transforming case-based reasoning; access control for XML with a dynamic query rewriting approach; web service ontologies for QoS and general quality evaluations; and transitioning existing content by inferring organization-specific document structure. The research paper mainly deals with systems, web services, and, above all, how to handle databases (Elmasri & Shamkant, 1989, p. 3).
SQL-Standard
The original SQL standard dates back to 1986 [SQL86], and the language is still developing with advances in database theory and practice. The original SQL86 was improved, leading to the SQL89 version. The [SQL92, MS92] standard, also referred to as SQL2, improved on [SQL89] and was published to address limitations in programming (Melton & Simon, 1992, p. 4). With the introduction of object-relational and object-oriented technologies, users of SQL may see the transformation to [SQL96], which extends the language. This paper mainly focuses on the latest published version, SQL2 (Sengupta & Dillon, 1996, p. 5).
Relational Databases
SQL was intended to work with relational databases. Data is stored in flat tables, in which tuples (rows) denote one record of the data and fields (columns) represent its properties; the description of the data is technically called meta-data. For example, if a field named BookName holds the value "SGML Handbook", then "SGML Handbook" is the data and BookName is the meta-data. This has a direct correspondence in SGML: Generic Identifiers (GIs) are the meta-data, and the character content within the GIs is the data. Meta-data in relational databases carries additional associated information, such as data size, data type, and index type. The flat structure of relational databases is one of their problems: a composite hierarchical structure must be mapped onto a corresponding flat structure (Melton & Simon, 1992, p. 5). The entity-relationship (ER) model is the most commonly used data model for conceptually representing a relational database; objects such as book name and city are referred to as entities (Sengupta & Dillon, 1996, p. 6).
SQL- A Brief Introduction
SQL came from SEQUEL (Structured English Query Language), the original version of which was developed at IBM's San Jose research laboratory. The proposed SQL3 [SQL96] turns SQL into a fully-fledged, object-oriented programming language. Despite being known as a query language, SQL is not restricted to queries: it comprises a data query language (DQL), a data manipulation language (DML), and a data definition language (DDL) (Sengupta & Dillon, 1996, p. 7).
DDL properties
DDL mainly deals with the structures of data in the database, which in SQL can also be called meta-data. The database is the highest level of the SQL structure and contains indices, tables, and views. Structures can be created and deleted using the DDL statements CREATE and DROP. For example, CREATE TABLE CARS (MAKE CHARACTER(15) NOT NULL, REG_NO INTEGER) creates a table called CARS with the columns MAKE and REG_NO. Other DDL statements include ALTER TABLE, DROP TABLE, and DROP DATABASE.
DQL- Properties
DQL mainly focuses on querying data by formulating SQL query statements. SELECT is the most commonly used SQL statement: it describes what you want, with FROM naming the source tables and WHERE stating the conditions.
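The DDL and DQL statements discussed above can be exercised end to end with Python's built-in sqlite3 module. The CARS table mirrors the CREATE TABLE example in the text (with REG_NO written as one token, since SQL identifiers cannot contain spaces); the inserted rows are illustrative.

```python
import sqlite3

# Runnable version of the DDL/DQL examples: CREATE builds the
# structure (meta-data), INSERT adds data, SELECT ... WHERE queries it,
# and DROP removes the structure again.

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# DDL: create the structure
cur.execute("CREATE TABLE CARS (MAKE CHARACTER(15) NOT NULL, REG_NO INTEGER)")

# DML: insert some rows (the data)
cur.executemany("INSERT INTO CARS VALUES (?, ?)",
                [("Ford", 101), ("Toyota", 102), ("Ford", 103)])

# DQL: SELECT describes what is wanted; WHERE restricts which rows
cur.execute("SELECT REG_NO FROM CARS WHERE MAKE = 'Ford' ORDER BY REG_NO")
rows = cur.fetchall()
print(rows)                          # -> [(101,), (103,)]

# DDL again: DROP deletes the structure
cur.execute("DROP TABLE CARS")
```

Note the separation of concerns: the first and last statements manipulate meta-data only, while INSERT and SELECT touch the data itself.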
Querying From the SGML and Suggested Extensions to SQL for Use with SGML
Schema representation distinguishes SGML from relational databases. In SGML a query can be defined easily, without the complex statements required in SQL. Complex structures are easily represented in SGML, since documents need not be broken into pieces to stand for them, whereas SQL cannot deal easily with complex structures. The main extensions required to use SQL with SGML documents introduce navigation of the tree structure and the use of other objects to build complex objects. The three extensions proposed in previous work [SD96] are cascading of the dot (membership, or ".") operator, a double-dot ("..", children) operator, and the capacity to specify a DTD form in the SELECT clause to divide and construct complex types (Sengupta & Dillon, 1996, p. 7).
Theoretically, these extensions are not complete, but it can still be demonstrated that core SQL, with a few minor extensions, can support powerful rules in the query language. SQL has been adopted and used in object-oriented databases such as O2, for example in the Reloop language. One problem is that SQL is not a good full-text query language when used in the relational domain. Standard SQL also cannot conduct schema-independent queries, though the proposed SQL extensions eliminate this setback to a large extent. However, to be efficient, SQL still requires data along with meta-data. With standard SQL there is no need to navigate a hierarchy, because of its flat relational structure, unlike in SGML, where navigation is essential (Sengupta & Dillon, 1996, p. 8).
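The tree navigation that the proposed dot/double-dot operators target can be seen with Python's standard xml.etree.ElementTree module; the child-axis path steps below play roughly the role of those operators. The document is a hypothetical example, not one from the paper.

```python
import xml.etree.ElementTree as ET

# Sketch of hierarchical navigation over a document tree -- the kind
# of access that flat relational SQL cannot express directly, and that
# the proposed dot/double-dot SQL extensions aim to provide.

doc = ET.fromstring("""
<library>
  <book><title>SGML Handbook</title><author>Goldfarb</author></book>
  <book><title>The XML Handbook</title><author>Goldfarb</author></book>
</library>
""")

# A child-axis path ("book/title") ~ cascaded membership operators
titles = [t.text for t in doc.findall("book/title")]
print(titles)                # -> ['SGML Handbook', 'The XML Handbook']

# Descendant traversal at any depth -- schema-independent access,
# which standard SQL cannot perform without knowing the full schema
authors = {a.text for a in doc.iter("author")}
print(sorted(authors))       # -> ['Goldfarb']
```

The second query illustrates the schema-independence point made above: it finds every author element regardless of where in the hierarchy it occurs.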
DTD stands for Document Type Definition, and DTDs can be extracted from XML documents. Data Descriptors by Example (DDbE) [Diaz and Berman 1999], a Java library from IBM alphaWorks, can generate a DTD from an XML document. DTDs can also be generated from documents using DTDGEN [Kay 1999], freely available software that applies simple rules. Similar results can be obtained using XTRACT [Garofalakis et al. 2000], the most recent such tool, designed at Bell Labs. Fred (Shafer, 1995, p. 8), a well-known if dated tool, can generate DTDs from arbitrary SGML documents and may be used with most XML documents. These tools generate DTDs that must then be validated for the appropriate document class (Berman & Diaz, 1992, p. 2).
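The core idea behind these tools can be sketched very simply: scan one XML instance and record, for each element type, the child sequences actually observed. This is a deliberately naive stand-in for what Fred, DDbE, DTDGEN, or XTRACT do; real tools apply far richer rules (optionality, repetition, factoring into regular expressions).

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

# Toy structure inference: for each element tag, collect the set of
# child-tag sequences seen in one document instance. A real DTD
# generator would generalize these sequences into content models.

def infer_content_models(xml_text):
    models = defaultdict(set)
    root = ET.fromstring(xml_text)
    for elem in root.iter():                   # root plus all descendants
        children = tuple(child.tag for child in elem)
        models[elem.tag].add(children)
    return dict(models)

sample = "<dept><name>CS</name><employee/><employee/></dept>"
print(infer_content_models(sample))
# dept was observed with children (name, employee, employee);
# name and employee were observed with no element children
```

Generalizing the observed sequence (name, employee, employee) into a content model such as (name, employee+) is exactly the step where the heuristics and experience mentioned below come into play.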
Database Reverse Engineering
A rich stream of research has focused on different reverse engineering problems, such as the reengineering of attributes, entities, ternary relationships, and binary relationships. The main technological target has been the relational data model, and this stream of research has produced innovative and practical results. Reverse engineering document structures differs in key ways from reverse engineering databases, which prevents a direct application of those results: documents contain unstructured data, unlike relational databases (Berman & Diaz, 1992, p. 2), and the focus here is on an individual document rather than many. Because XML schema standards are still underdeveloped, their application to documents with extended links is poorly supported compared with the well-established foundation of database schemas (Garofalakis & Gionis, 2000, p. 3). Database reverse engineering considers multiple instances to identify the attributes and entities of a conceptual or relational schema, while the document tools (DDbE, XTRACT, Fred) focus on a single XML instance document.
Inferring Document Structures
DTDs can be compiled directly from an XML document using a tool such as those described above (e.g. Fred). It is fairly easy to generate a DTD from XML, as opposed to SGML, since well-formedness ensures that a DTD can always be inferred from the document. The aim of this research is to generate generic DTDs that capture the information in a set of document instances, though a given document may require rewriting to validate against the DTD (Goldfarb & Prescod, 2000, p. 3).
Document Structure Generation Heuristics
Generating a normative DTD from possibly inconsistent documents that share a common structure raises a number of issues. A definitive solution can be reached in some cases, but most cases require experience and intelligence to choose the resulting structure.
A Framework for Automated Construction and Transformation of a Case-Based Reasoning
Case-based reasoning (CBR) systems as presently constructed tend to fall into three common implementation models. Task-based implementations have customarily emphasized system goals concerning only the constraints imposed by the reasoning task itself; the majority of research systems focus on particular (frequently idiosyncratic) representations and methods optimized to tackle a specific reasoning task, either to demonstrate the success of the method or to meet specific task goals (Purao & Storey, 2000, p. 4). Recently, CBR has seen successful and increasing adoption into enterprise systems (e.g. [Wat97, SW98]) to manage corporate data assets through knowledge management [BFA99]. Enterprise implementations respond to the additional constraints imposed on CBR systems as part of the overall enterprise architecture [KS96]; the most vital implementation constraint in this perspective is that CBR integrations must naturally operate in conjunction with database systems, the basis of corporate knowledge activities (Allen & Patterson, 1995, p. 3).
CBR implementations make use of, and provide for, database functionality in either object database systems (e.g. [EII95]) or relational database systems (e.g. [GW98]), though not every CBR enterprise implementation will make sense. Currently emerging CBR systems take advantage of new developments in knowledge sharing and the evolution of the World Wide Web (e.g. [GW98, Shi98, DFH+98]). Web-based implementations react to the extra constraints imposed on CBR systems by conforming to structured document representation standards for network communication, in particular the Extensible Markup Language (XML) [BPS98]. This paper is concerned with building a real reasoning system, not with how it presents data: a web implementation need not have a web interface, while a task-based implementation might have one. It is therefore important to understand (1) how the models compare, (2) how they combine, (3) how each is constructed, and especially (4) how one may be built by transforming another (Berchtold & Bohm, 1997, p. 6).
Implementation Models
Implementation characterizations are applicable at many levels of typical CBR systems; here it is useful to separate CBR representation and process. This discussion is limited to relational database and XML task-based models, leaving aside Standard Generalized Markup Language (SGML) and the more complex entity and object-oriented models. Enterprise: incorporating case-based reasoning into enterprise databases must respect the regularity constraints imposed by systems that are virtually universal in the enterprise community. Representations must match the table model of relational database systems (RDBS), while processes must follow SQL conventions. CBR systems thereby gain the underlying strengths of RDBSs, such as recovery/backup, security, scalability, and concurrency control. Web: data representation on the web uses the emerging XML as its tool (Shafer, 1995, p. 5).
Realizing Implementations
Realizing an implementation involves outlining the representation and process for each model, as well as defining and illustrating the transformations between models.
Enterprise/RDBS
This model associates a case structure with a relational database. General CBR systems can be represented using the entity-relationship (ER) model by properly identifying the distinct components of the problem space. If k-nearest-neighbour (k-nn) retrieval is implemented, the CBR process can use the database system directly (Fernandez & Aha, 1999, p. 9).
As a result, case-based reasoning/database integration can be seen as acting on roughly three levels. (a) Simple storage: the database is employed purely as a storage medium for cases, and external systems are used to retrieve and process them; a query is simply SELECT * FROM case_table.
(b) Simple retrieval: a basic selection is made using conditions drawn from the target, and the resulting subset is processed externally; the query is SELECT * FROM case_table WHERE conditions. (c) Metric retrieval: a metric function is used; the basic query is SELECT * FROM case_table ORDER BY metric, taking the k best rows.
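Metric retrieval can be demonstrated on a real RDBS: sqlite3 allows a Python similarity function to be registered and used directly in ORDER BY, mirroring SELECT * FROM case_table ORDER BY metric. The case attributes and the weighted-distance metric below are hypothetical illustrations, not from the paper.

```python
import sqlite3

# Sketch of level (c), metric retrieval, for a CBR case base stored in
# a relational database. The metric function runs inside the SQL query.

def distance(temp, pressure, q_temp, q_pressure):
    """Simple weighted distance between a stored case and the query."""
    return abs(temp - q_temp) + 0.5 * abs(pressure - q_pressure)

conn = sqlite3.connect(":memory:")
conn.create_function("metric", 4, distance)   # expose it to SQL

conn.execute("CREATE TABLE case_table (id INTEGER, temp REAL, pressure REAL)")
conn.executemany("INSERT INTO case_table VALUES (?, ?, ?)",
                 [(1, 20.0, 1.0), (2, 35.0, 2.0), (3, 21.0, 1.1)])

# k-nearest-neighbour retrieval, k = 2, for the query (temp=22, pressure=1.0)
rows = conn.execute(
    "SELECT id FROM case_table "
    "ORDER BY metric(temp, pressure, 22.0, 1.0) LIMIT 2").fetchall()
print(rows)          # -> [(3,), (1,)]
```

Pushing the metric into the query lets the database do the ordering and limiting, so only the k best cases ever leave the RDBS, which is the scalability benefit the enterprise model claims.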
Access Control for XML: A Dynamic Query Rewriting Approach
Proceedings of the 31st VLDB Conference, Trondheim, Norway, 2005
The aim is to make data efficiently accessible. Given, for example, an XML document tree, different user groups may have different access permissions to parts of the document, and the security specification model must ensure that these policies are enforced appropriately and efficiently. For a query over a secured XML document tree, the query result should contain only nodes the user has permission to see, in the context in which he or she may see them (Bray & Paoli, 1998, p. 13). The access control denoted by the imposed policies must ensure that users reference data only indirectly, through a set of queries on a tree view. This paper proposes the concept of views as the mechanism for XML access control: a security specification language (SSX) is introduced, along with a policy of rewriting user queries to impose the security constraints (Berchtold & Bohm, 1997, p. 7).
Challenges
The semi-structured character of XML data means that the data is not in normalized form, which makes the job of determining security views non-trivial. XML data can have replicated or missing elements and omitted attributes. Identifying an element is no longer a matter of the element value alone (as with a key in the relational model) but depends on the context: the structure of the path (reaching the element from the root) and the children/descendants of the element. In some cases only a particular user group may access certain elements' contents, or visibility may be conditional on the value or structure of elements outside the sub-tree rooted at the element in question. User groups can also see differing structures for the same element (Bray & Paoli, 1998, p. 15). Access control in XML must therefore consider the structural relationships between nodes. A further challenge is the occurrence of numerous access control policies: with constantly changing data, it is expensive to materialize and maintain each view that implements a security design. XML access control research has dealt with some of these issues with differing efficiency and degrees of success.
Proposed approaches range from XML cryptography [9] and access control languages [14, 12] to materialized security views [1, 5] and check-and-execute methods. Recent work by Fan et al. [6] proposed an approach that annotates the security limits on the schema structure and rewrites XPath queries issued against the parent XML document; the expressiveness of its security annotations is restricted to hiding node/sub-tree values. Enforcing security constraints focused on the structural relationships between elements, which are at least as important as the values, remains an open question in the XML setting and is one of the core contributions of this paper (Doyle & Ferrano, 1998, p. 17).
Motivating scenarios
Consider an XML database holding a university's human resources information. In its hierarchical structure, the university has several departments (dept), each of which has a location and an inventory of employees.
Preliminaries and Problem Definition
XML data is usually represented as a nested, node-labelled tree structure, in which objects (elements, attributes, and element contents) are represented by nodes and containment relationships are represented by edges between objects. XML schema information is represented using two popular languages, XML Schema and DTDs. An XML structure may be represented as a tree, as in Figure 1; as far as security is concerned there is no significant difference between XML Schema and a DTD for demonstrating schema information (features such as data types are not needed in a schema used for security view specification). XPath [4] is a declarative query language for XML documents and is at the core of more complex XML query languages, such as XQuery [3]. A query requirement can be declared using an XPath expression by locating the nodes concerned through the path from the document root to the elements that serve as the roots of the sub-trees to be retrieved (Daengdej & Lukose, 1997, p. 18).
XML security view specification
Security Specification Language for XML (SSX)
Web Service Ontologies for QoS and General Quality Evaluations
Keywords: ontologies, semantic Web, QoS, quality.
Literature review
There are numerous Quality of Service (QoS) ontologies explicitly described in the literature, such as FIPA's, MILO's, and many others. These ontologies are purpose-built for evaluating QoS metrics such as bit error rate and contain IT-related terms for web services, such as valid transport protocol names. A more universal approach, instead, is to specify an ontology for SLMs. This approach of modelling more universal contracts is better aligned with the spirit of the Mid-level Ontology for Quality (MoQ), since the primary premise of TOVE is that this class of view belongs in the design of a formal representation of QoS and other web service constraints. The essential ontologies are held to contain QoS metrics, currency units, measurement methods, measurement units, and measurement properties.
Motivating scenarios and competency questions
In common ontological engineering methodology, the construction of an ontology begins with a stylized picture of the business situation in which the ontology-based system will be used (Stolpmann & Wess, 1998, p. 7); this is known as the motivating scenario. Competency questions arise where the scenario is parsed into higher-level business questions that the ontology-based system should be capable of answering (Watson, 1997, p. 8). An example motivating scenario is a time-wasting process involving database queries, e-mailing reports, and editing/building mailing lists. An ontology-based system must be able to answer competency questions such as: is this a quality-of-system requirement? Is this requirement satisfied?
MOQ
Requirements Ontology
The frequently asked question is whether something is a QoS requirement. This question is naturally expressed in first-order logic as qos_requirement(Q), where Q is a variable that can denote the ID or the name of the QoS requirement. A QoS requirement is a special kind of quality requirement, which in turn is a requirement: ∀Q [ qos_requirement(Q) → quality_requirement(Q) → requirement(Q) ] (2). From the ID, many things can be reasoned about the requirement irrespective of its content, e.g. its structure.
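The implication chain above, that every QoS requirement is a quality requirement and every quality requirement is a requirement, can be rendered as a simple subsumption hierarchy. The class names and the example requirement below are illustrative encodings, not part of MoQ itself.

```python
# Toy encoding of the first-order statement: qos_requirement(Q)
# implies quality_requirement(Q), which implies requirement(Q).
# A class hierarchy makes the subsumption hold by construction.

class Requirement:
    def __init__(self, req_id):
        self.req_id = req_id        # reasoning can start from the ID alone

class QualityRequirement(Requirement):
    pass

class QoSRequirement(QualityRequirement):
    pass

q = QoSRequirement("round-trip-time <= 200ms")   # hypothetical requirement

# every QoS requirement is a quality requirement, and a requirement:
print(isinstance(q, QualityRequirement), isinstance(q, Requirement))
```

An ontology-based system answers the competency question "is this a requirement?" for any QoS requirement simply by walking this subsumption chain.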
Conclusion
This conclusion draws together the research work and topics treated in the paper as a whole. It has been shown that a number of the semantic Web ontologies useful for assessing QoS are highly implementation-aware. Round-trip time is one of the relevant QoS metrics these ontologies depict, and it may even be tightly combined with Web service practices that permit successful assessment. SHOE- or OWL-based ontology measurement cannot readily answer several design questions, e.g. is there a range of values that is acceptable? This is because such ontologies do more general representing and less representing of QoS itself (Sengupta, 1998, p. 10). QoS metrics should be able to express estimated metrics for semi-automated business processes that partly, but not wholly, utilize Web services, in order to pre-empt possible misunderstandings between a Web service requester and provider; there is clearly value in grounding QoS ontologies in ontologies of more universal quality concepts, and in enhancing inter-operation among these groups of ontologies (Kitano & Shimazu, 1996, p. 11).
In relational databases the standard language is SQL (Structured Query Language), but it has so far made little impact in the SGML world. SQL is present in object-relational and object-oriented databases, while SGML remains outside SQL databases. Implementing SQL in SGML systems would be valuable, and it is not far off. A prototype of the methodology is under construction: XSLT transformations will be used in the implementations, and the natural language processing and heuristic components will be implemented in Java. Diverse versions of several of these heuristics are being tested for reverse engineering relational databases; they will be adapted and improved, and new heuristics constructed as required by the proposed research. The tool will be tested using the DTDs developed for lecture and course content management, and its behaviour should be examined manually to understand its actions and how they can be strengthened (Gardingen & Watson, 1998, p. 13).
A categorization of existing CBR implementation models into three classes has been presented, and it has been shown how this view offers realistic support for constructing and maintaining corporate memories. The general transformations from one implementation to another permit the conversion of current implementations and smooth the progress of combining implementation kinds to meet new and changing task requirements. With the growing popularity of XML and XML databases, the ability to hide data from user groups is as significant as making the data available to end users in a friendly and efficient manner. XML access control challenges arise from the semi-structured character of XML documents compared with the relational world: in the XML context the structural relationships between attributes/elements are sensitive, not only their values. SSX, a security view specification language for XML, is proposed so that DBAs can specify the security constraints. The plan is to study the proposed primitives from a formal perspective to determine useful properties, and to conduct an algorithmic study to compute bounds for the rewrite algorithm.
References
Allen J. and Patterson D. (1995). ACM PODS: Integration of Case Based Retrieval with a Relational Database System in Aircraft Technical Support. Springer Publishing Group.
Astrahan M. and Chamberlin D. (1975). Communications of the ACM: Implementation of a Structured English Query Language. International Publishing Group.
Bancilhon F. and Delobel C. (1992). The Story of O2: Building an Object-Oriented Database System. Morgan Kaufmann Publishers.
Berchtold S. and Bohm C. (1997). ACM PODS: A Cost Model for Nearest Neighbor Search in High-Dimensional Data Space. Springer Publishing Group.
Berman A. and Diaz A. (1999). Data Description by Example. Alpha Research Project Documentation. Web.
Bray A. and Paoli J. (1998). Extensible Markup Language. Foxit Software Company. Web.
Daengdej R. and Lukose D. (1997). How Case-Based Reasoning and Cooperative Query Answering Techniques Support RICAD. Springer Publishing Group.
Doyle M. and Ferrario M. (1998). Technical Report: CBR Net:- Smart Technology Over a Network. Trinity College Dublin.
Elmasri R. and Shamkant B. (1989). Fundamentals of Database Systems. Cummins Publishing Group.
Fernandez I. and Aha D. (1999). Case-Based Problem Solving for Knowledge Management Systems. AAAI Publishing Group.
Gardingen D. and Watson I. (1998). A Web Based Case-Based Reasoning System For HVAC Sales Support: In Applications & Innovations in Expert Systems. Springer Publishing Group.
Garofalakis D. and Gionis A. (2000). XTRACT: A System for Extracting Documents Type Descriptors from XML Documents. International Publishing Group.
Goldfarb S. and Prescod P. (2000). The XML Handbook. Prentice Hall PTR.
Kitano H. and Shimazu H. (1996). The Experience Sharing Architecture: A Case Study in Corporate-Wide Case-Based Software Quality Control. AAAI Publishing Press.
Melton J. and Simon R. (1992). Understanding the New SQL: A Complete Guide. Morgan Kaufmann Publishers.
Purao S. and Storey V. (2000). Reconciling and Cleansing: An Approach to Domain Models. New Working Paper.
Sengupta A. (1998). The Design of Docbase: Toward the Union of Databases and Document Management. Tata McGraw Hill.
Sengupta A. and Dillon A. (1996). A Methodological Overview: Extending SGML to Accommodate Database Functions. JASIS Publishing Group.
Shafer K. (1995). SGML 95 Conference: creating DTDs via the GB-Engine and Fred. Boston Publishing Group.
Stolpmann M. and Wess S. (1998). Intelligent System for E-Commerce and Support. International Publishing Group.
Watson F. (1997). Applying Case-Based Reasoning: Techniques for Enterprise Systems. Tata McGraw Hill.
Quality assurance is vital for designing and developing valuable software. It ensures that the organization identifies its real needs and provides justification for a quality solution that enables the organization to achieve its business software management requirements. To have a well-functioning and up-to-date software solution, quality has to be embraced throughout its development; this ensures that the software performs at its best and that the organization gets sound results from it. Several quality standards facilitate this and are of paramount importance. Reliability of the software is one key concern: the software should be able to perform the tasks formulated in the feasibility study and analysis stage (Nee, 1996, p. 12). The software should also be understandable and clear to the user.
The language employed in development, and the design itself, should be simple and easy to understand. Proper, detailed documentation should be provided to enable both the users and the management to fully understand what is needed in the program (Nee, 1996, p. 58). The program should be complete, i.e. all code, inputs, and parameters used in its design should be included and made available. The program should be portable: it should run on multiple hardware platforms and operating systems with different configurations. Maintainability of the program is also of vital importance (Nee, 1996, p. 55): the program should be able to accommodate current changes and future updates. A good program uses system memory, hard disk space, and other resources efficiently (Nee, 1996, p. 57).
Cost Considerations
Costs in developing a new system, when not budgeted correctly, can prevent the process from going as planned. To facilitate a smooth take-off of the project and to keep costs at a minimum, checks have to be instituted to prevent recurring expenses during system development and so realize quality work (Lancrin, 2007, p. 12). There should be clear, well-documented requirements: with the functions clearly defined, there is less risk of delivering a system that does not meet the users' expectations, which prevents a situation where the system has to be reworked (Lancrin, 2007, p. 13). Testing of the system should start early, during the design stage, and should be done by programmers who know how to test effectively and have proper testing skills. Testing is done in conjunction with the designers of the system to ensure that the system fulfils its intended objectives.
Software for Self Assure
The Self Assure Company incurs costs on laser and ink cartridges, paper and office stationery, because every aspect of a client's policy and claims forms is printed. This has been costly in terms of purchasing the stationery and the cartridges. To enable Self Assure to bring these costs down, automating these services will be of great help. Software that tracks and monitors print jobs will create an environment in which only vital documents are printed, moving the company towards a paperless office. Software that tracks printing and stores clients' policies and claims in a database will help Self Assure cut the cost of stationery and the amount spent on ink and laser cartridges.
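The print-tracking proposal can be sketched briefly. The class and names below are hypothetical, chosen only to illustrate the idea; a real deployment would store to the company database rather than an in-memory list.

```python
from dataclasses import dataclass, field

@dataclass
class PrintTracker:
    """Logs every print request and only releases documents marked vital."""
    archive: list = field(default_factory=list)   # stands in for the company database
    printed: list = field(default_factory=list)

    def submit(self, document: str, vital: bool = False) -> str:
        if vital:
            self.printed.append(document)         # released to the printer
            return "printed"
        self.archive.append(document)             # stored electronically instead
        return "archived"

tracker = PrintTracker()
tracker.submit("claim-form-001")                  # routine form: archived, not printed
tracker.submit("policy-renewal-099", vital=True)  # vital document: printed
```

Every submission is logged either way, so the spend on cartridges and paper becomes directly measurable.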
Major Risks
In program development, both foreseen and unforeseen risks occur, and to deliver a quality, dependable system these risks have to be understood. Benefit risk is one of the risks associated with project development (Thornton, 2004, p. 36). It occurs when the new system does not meet the expectations of the user's business needs and requirements as set out in the problem definition: even once the system is delivered, it cannot perform to the user's expectations, is prone to underutilization and quickly becomes obsolete, making the whole process a waste of time and resources. A related risk arises in circumstances where the work to develop and design the new system exceeds the limits of the available budget and needs more time to be completed. Communication risk is another common risk to contend with when developing a software program (Thornton, 2004, p. 33): when giving information for the new system, the customer has to communicate clearly what the new system should do, specify the required features and expectations the program has to meet, and submit this information in a timely manner. Technology risk is also a major risk (Thornton, 2004, p. 38). Improper specification of the most useful technology available in the market can have a huge impact on the project; the technology employed should be relevant and compatible with the market and allow room for future updates and maintainability.
To prevent the occurrence of these risks, clear checks should be employed. Sound feasibility and problem analysis should be done clearly and thoroughly, and comprehended before the actual design is started; precise inputs, outputs and processes have to be considered to ensure the team has the expected development tools for exact system development (Thornton, 2004, p. 21). Delivery risk should also be considered: an apt guideline has to be formulated, financial resources have to be set aside for the project and team members, and a time frame should be drawn up so that the team and the customer know when to expect results. This ensures that system development has an adequate budget for its operations and a clear period in which to complete the project. Proper feasibility research will help eliminate technology risk; the team should choose a programming language that is already available in the market and that will be easier to upgrade and maintain when a new expansion is needed (Thornton, 2004, p. 32).
Quality Plan
For delivering an efficient system, quality is of the essence. Several quality plans have to be factored into the development of a properly functioning system. This is important to assure the organization that what it has is what was expected.
Requirement definition
The first step in creating a quality plan is clearly defining the objectives and goals of the new system. The project should also be in line with the company's formulated mission and guidelines; this ensures that practices which have to be re-engineered before or during implementation are aptly considered.
Architecture
The system should reflect the technological changes that are already in the market. Its design and installation should conform to the current nature of the business, which will allow future improvement when new changes are introduced.
Formal education
Formal education for implementation should be considered. Time and resources for planning and scheduling should be included in the project. A good curriculum, and the mechanism for delivering it to the target audience, should be established. If the users of the system are to be involved, resources and training support should be defined.
Design reviews and code inspections
Design reviews and code inspections should be carried out; any errors detected should be worked on and acceptable standards established. A process for handling change should be developed by the company and appropriate actions taken.
Project planning
It is important to plan the project at every phase because planning helps the development team assess the project's viability and develop an exact system based on the facts and information gathered. This is paramount and ensures that all the stages outlined in program development are followed to the letter. Planning offers the opportunity to set the project on course and monitor its progress (Barkley, 2007, p. 123). An appropriate statement of work detailing the whole project plan should be reviewed and put together for approval by the management to eliminate any hitches once the work is in progress. This is important because it will serve as a record of, and a check back against, the original plan when a change needs to be effected (Barkley, 2007, p. 123). It can also serve as a defence when the management questions why the project has taken longer than expected.
If the plan will affect the end-users, a user acceptance training contract should be used, which the users should be willing to read and sign, in case there is later reluctance from them (Barkley, 2007, p. 130). If an issue crops up during testing, it can then be traced to a user who did not realize it, which is normal, and the contract will serve as a defence should such scenarios happen. Successful project planning is all about preventing the system from failing to perform to expectation, and the earlier this is covered the better the desired results (Barkley, 2007, p. 133).
Qualities of a Good Manual
A good system has to have clear, understandable documentation to facilitate a proper understanding of the system. For the manual to justify quality, it should describe the system fully and explain how the quality requirements are met by the system. It should also offer a sound guideline for implementation and serve as a clear definition of the developed system. The manual should be able to teach the quality requirements to those who are involved, especially the users and the management, and should set out quality practices pertaining to control and other management activities within the company.
The manual for Self Assure should have a title indicating what the manual is about, and a table of contents outlining its topics with a brief discussion of each. It should also have an overview of the manual and why it was designed, and an introduction to the company detailing what it is engaged in. The quality policy and objectives should also be part of the manual, to elaborate the strengths of the system that has been developed. A reference list or glossary should be provided so that complex or unfamiliar words and phrases can be checked.
Referencing Facility
The referencing facility should be well drawn and documented. It should show a clear picture of what the system can and cannot do. This system has to manage print jobs and handle customer policy applications and customer claims. Only vital reports will be printed; other applications will be stored in the company's database, cutting the cost of ink and paper and creating a paperless office. Only authorized personnel, with authorization from the management, will be allowed to print vital documents. The system documentation will contain detailed specifications, policy applications, claims, the types of prints to be produced, and a quality assurance manual with operational procedures that meet international standards and codes of ethics.
TimeLine
The Self Assure project will take a considerable amount of time to implement. The timeline below breaks the implementation down into more manageable tasks and time frames:
Task                               Duration (month/year)
Scope of problem definition        July 2010 - December 2010
System Analysis                    January 2011 - May 2011
System Design                      June 2011 - October 2011
System Coding and Unit Testing     November 2011 - June 2012
Integration Testing                July 2012 - November 2012
Documentation                      December 2012 - May 2013
Report Writing                     June 2013 - September 2013
Training                           January 2014 - May 2014
System Implementation              October 2014
Design stage Questions
The design stage is critical in the creation of the new system, and several questions have to be asked about it. Questions about the system's external interfaces have to be raised (Wang, 2002, p. 39): the external environment cannot be controlled, but interfaces that cross boundaries, such as programming languages, need special mention and special consideration, and should be dealt with before moving on with the program design. The user of the system is another question to ask yourself at the design stage: assess your users' ability to use the system, since more professional users have more demanding requirements than ordinary users (Wang, 2002, p. 41). It is also important to consider whether the system will need a database; if it does, a distributed interface will come in handy (Wang, 2002, p. 50).
The components to use are also a major consideration. You have to know which components to use and which components of the existing project are likely to be reused in the future. Care should be exercised in their reuse, because careless reuse may make the current system infeasible (Wang, 2002, p. 56). The project's security policy is also a concern during the design stage: proper security has to be built into the system, and it is best if security features are integrated and tested along with the system (Wang, 2002, p. 60).
Organizational Models
A variety of software engineering methodologies are currently used to develop systems. They clearly describe the phases of development and the order in which they are executed; different models work better for different platforms, but they all follow a similar pattern.
General Model
In this model, each stage of development produces the deliverables required by the next stage (Wasson, 2005, p. 50). The requirements are transformed into the design, and the design is translated into code during implementation. Testing in this model verifies the deliverables of the implementation phase (Wasson, 2005, p. 66). The core business information is collected in this phase, and it is where the project lies.
Waterfall Model
This is the most common form of the system development life cycle. Its approach is simple to use and understand (Wasson, 2005, p. 66). Each phase has to be completed in its entirety before the next phase begins. The model is simple and easy to use, and easy to manage due to its inflexibility. However, no working software is produced until all stages are complete, which amounts to high risk (Wasson, 2005, p. 68).
V-Shaped Model
In this model, a sequential path of program execution is followed: each phase is completed before the next stage is embarked on, and testing is done at every stage to eliminate the risk of bugs (Wasson, 2005, p. 80). A test plan is developed early in the cycle, before coding is done.
User Interface Test Case
This involves testing the graphical user interface of the product to determine whether the system performs its intended objectives correctly (Wasson, 2005, p. 45). It includes testing the system with both usual and unusual input to determine its reliability and efficiency, and is made up of user-based and functional testing of the system via the user interface.
Program Logic-based test design
In this testing, each program flow path is tested uniquely, at least once (Wasson, 2005, p. 52). The complexity of the flow determines the actions that have to be taken.
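As a sketch of path-based test design, consider a toy claims function (the name and business rules are invented for illustration) whose two decision points yield three flow paths, each exercised by at least one test:

```python
def claim_payout(amount: float, policy_active: bool) -> float:
    """Toy claims rule with two decision points, giving three flow paths."""
    if not policy_active:          # path 1: inactive policy pays nothing
        return 0.0
    if amount > 1000:              # path 2: large claims attract a 10% excess
        return amount * 0.9
    return amount                  # path 3: small claims are paid in full

# One test per flow path, so every branch outcome is exercised at least once.
assert claim_payout(500, policy_active=False) == 0.0
assert claim_payout(2000, policy_active=True) == 1800.0
assert claim_payout(500, policy_active=True) == 500.0
```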
Input Domain-based test design
In this form of testing, data structures and logic are the key elements making up the program (Wasson, 2005, p. 53). The data structures, modelled as data models, make up the input procedures of the system: they describe the entities and their relationships in the program and define the attributes of each entity and its data.
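A minimal boundary-value illustration of input domain-based testing, using an invented range validator: the interesting test cases sit just inside and just outside each edge of the valid input domain.

```python
def premium_band_valid(age: int) -> bool:
    """Invented rule: the insurer only covers ages in the closed domain 18..75."""
    return 18 <= age <= 75

# Boundary-value cases probe just inside and just outside each edge of the domain.
assert not premium_band_valid(17)
assert premium_band_valid(18)
assert premium_band_valid(75)
assert not premium_band_valid(76)
```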
Control Models
There are two common types of event-driven control model. The first is the broadcast control model, in which the event being executed is broadcast, so communication of events tends to spread over a wide area. The event handler broadcasts the information to the available components, and any component that is able to handle that event can respond to it. Its advantages are that it is a quick and flexible way of responding, and that components can interact with one another without knowing each other's location or name. However, a component cannot be sure when its events will be handled.
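The broadcast model can be sketched in a few lines; the `Broadcaster` class and the example events below are invented for illustration.

```python
class Broadcaster:
    """Minimal broadcast control model: every event is announced to all
    registered components, and any component able to handle it responds."""
    def __init__(self):
        self.handlers = []            # components that have asked to hear events

    def register(self, handler):
        self.handlers.append(handler)

    def broadcast(self, event):
        responses = []
        for handler in self.handlers:
            result = handler(event)   # every component sees every event
            if result is not None:    # only capable components respond
                responses.append(result)
        return responses

bus = Broadcaster()
bus.register(lambda e: f"logger saw {e}")                     # handles everything
bus.register(lambda e: "audited" if e == "login" else None)   # handles one event
```

Note that the sender never names a recipient; components decide for themselves whether an event concerns them.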
The second type is the interrupt-driven control model. Here, events are centred on the concept of interrupts and how these interrupts are passed to interrupt handlers; in some situations interrupt handlers behave much like event handlers that control events. The advantage of this model is that it is faster and more flexible, because it optimizes speed and facilitates quick responses to system events; however, it is very hard to debug when an error is noticed. Interrupt-driven control is mostly used for real-time systems where an instant response is critical, and it is used for handling emergency conditions such as security checks and violations.
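A comparable sketch of the interrupt-driven model, using an invented `InterruptController` that maps interrupt numbers to handlers, much like a simplified interrupt vector table:

```python
class InterruptController:
    """Toy interrupt-driven control: interrupt numbers map to handlers,
    and raising an interrupt transfers control to its handler at once."""
    def __init__(self):
        self.vector = {}                      # the interrupt vector table

    def install(self, irq: int, handler):
        self.vector[irq] = handler

    def raise_interrupt(self, irq: int):
        handler = self.vector.get(irq)
        if handler is None:
            return "unhandled"                # no handler installed for this line
        return handler()                      # control passes directly to the handler

ctrl = InterruptController()
ctrl.install(1, lambda: "security alarm acknowledged")
```

Unlike the broadcast model, each interrupt goes straight to one specific handler, which is what makes the response fast but the control flow harder to trace when debugging.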
Interface Testing
Interface testing is the merging and testing of programs to see whether they perform to the desired objectives (Lewis, 2000, p. 22). I think the program must be subjected to interface testing because some errors would otherwise slip through unnoticed: interface testing involves three phases (program integration, subsystem integration and system integration), and each phase is designed to check against specific errors (Lewis, 2000, p. 37).
References
Barkley, B., 2007, Project Management in New Product Development, McGraw-Hill Professional, New York
Lancrin, S. V., 2007, Cross-Border Tertiary Education: A Way towards Capacity Development, OECD Publishing, Paris
Lewis, E. W., 2000, Software Testing and Continuous Quality Improvement, CRC Press, Boca Raton, FL
Nee, P. A., 1996, ISO 9000 in Construction, Wiley-IEEE, New Jersey
Thornton, A. C., 2004, Variation Risk Management: Focusing Quality Improvements in Product Development and Production, John Wiley and Sons, New Jersey
Wang, X. J., 2002, What Every Engineer Should Know About Decision Making Under Uncertainty, CRC Press, Boca Raton, FL
Wasson, S. C., 2005, System Analysis, Design, and Development: Concepts, Principles, and Practices, John Wiley and Sons, New Jersey
A routing protocol stipulates how routers communicate with each other. Information passed between routers defines the routes along which data will be directed between two nodes. Each router has knowledge of the network currently attached to it; the protocol ensures that a router's identity is first made known to its immediate neighbours and then passed on to the routers connected to them. It is through routing protocols that the network topology becomes known and the routers determine the shortest route for delivering data between nodes (Black 202).
There are three major categories of routing protocol. The first is interior gateway routing via link-state protocols; examples in this category are OSPF (Open Shortest Path First) and IS-IS (Intermediate System to Intermediate System). In this mode, each node constructs a virtual map of the network and calculates the best logical route to follow with regard to the time it will take. Another category is interior gateway routing via distance-vector protocols, where the path is calculated using a predesigned algorithm, in this case the Bellman-Ford algorithm; the algorithm dictates that a map be drawn from the information acquired from the router's neighbours. Examples under this category are RIP (Routing Information Protocol) and IGRP (Interior Gateway Routing Protocol). The last category is exterior gateway routing, where BGP (Border Gateway Protocol) is the protocol used on the internet (Black 203).
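The link-state computation that OSPF-style protocols perform at each node is essentially a shortest-path calculation over the full topology map. A sketch using Dijkstra's algorithm, with a hypothetical three-router topology and link costs:

```python
import heapq

def shortest_paths(graph: dict, source: str) -> dict:
    """Dijkstra's algorithm over a link-state map: each node knows the whole
    topology and computes its own best cost to every other node."""
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        cost, node = heapq.heappop(queue)
        if cost > dist.get(node, float("inf")):
            continue                          # stale queue entry, skip
        for neighbour, weight in graph[node].items():
            new_cost = cost + weight
            if new_cost < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_cost
                heapq.heappush(queue, (new_cost, neighbour))
    return dist

# Hypothetical topology: A-B costs 1, B-C costs 2, A-C costs 5.
topology = {
    "A": {"B": 1, "C": 5},
    "B": {"A": 1, "C": 2},
    "C": {"A": 5, "B": 2},
}
```

Here router A reaches C via B at cost 3, cheaper than the direct link at cost 5, which is exactly the decision a link-state router makes from its map.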
The administrative distance is the gauge routers use to determine the route over which data will be transferred from one router to another in a computer network. It dictates the best path in terms of distance and reliability. A static route always has an administrative distance of 1; for IGRP it is 100 and for RIP it is 120 (Black 200).
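Route selection by administrative distance amounts to picking the candidate learned from the most trusted source, i.e. the lowest value. A sketch using the default distances quoted above:

```python
# Default administrative distances cited in the text.
ADMIN_DISTANCE = {"static": 1, "IGRP": 100, "RIP": 120}

def best_route(candidates):
    """Pick the route learned from the most trusted source,
    i.e. the one with the lowest administrative distance."""
    return min(candidates, key=lambda source: ADMIN_DISTANCE[source])

# If the same destination is learned via RIP, IGRP and a static route,
# the static route wins because 1 < 100 < 120.
```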
IGRP (Interior Gateway Routing Protocol): this protocol was developed by Cisco as an improvement over RIP and combines multiple metrics for data transfer along a route, including the allocated bandwidth, the delay encountered while data is being transferred, the load and the reliability factor of the route. All these metrics are combined in a formula, which is adjustable using a set of constants. The maximum hop count in this protocol is 100. It is also important to note that updates regarding any changes that may have taken place are broadcast to each router on the network every 90 seconds (Black 217).
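The composite metric is usually documented in roughly the following form. The constants, scaling and example link values here are assumptions for illustration; with the usual default constants the metric reduces to bandwidth plus delay.

```python
def igrp_metric(bandwidth, delay, load=0, reliability=255,
                k1=1, k2=0, k3=1, k4=0, k5=0):
    """Composite IGRP-style metric. Inputs are assumed to be already scaled
    (bandwidth = 10^7 / slowest link in kbit/s, delay in tens of microseconds).
    With the default constants this reduces to bandwidth + delay."""
    metric = k1 * bandwidth + (k2 * bandwidth) / (256 - load) + k3 * delay
    if k5 != 0:                      # reliability term only applies when k5 is set
        metric *= k5 / (reliability + k4)
    return metric

# A hypothetical route over a 1544 kbit/s link with 2000 microseconds total delay:
# bandwidth term = 10_000_000 // 1544 = 6476, delay term = 2000 // 10 = 200.
```

Adjusting the constants lets an administrator weight load or reliability into path selection without changing the protocol itself.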
RIP: this protocol uses the distance-vector algorithm, specifically Bellman-Ford, and is used in both local area networks and wide area networks. It has a short maximum hop count of 15. Routers update each other about their routes every 30 seconds; because of this short interval, mechanisms such as route poisoning and hold-downs were created to reduce the chance of wrong information or updates being sent, minimizing errors resulting from loops in the network. The hold-down timer is 180 seconds. Although not the most preferred protocol, it is popular because of its ease of configuration (Black 218).
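One distance-vector update step in the Bellman-Ford style that RIP uses can be sketched as follows; the router names, tables and costs are hypothetical:

```python
def dv_update(my_table, neighbour_table, link_cost, max_hops=15):
    """One RIP-style distance-vector step: for every destination the
    neighbour advertises, adopt its route if going via the neighbour is
    cheaper; costs above max_hops count as unreachable."""
    updated = dict(my_table)
    for dest, cost in neighbour_table.items():
        via = cost + link_cost
        if via <= max_hops and via < updated.get(dest, float("inf")):
            updated[dest] = via
    return updated

# Router A (directly connected to B at cost 1) learns B's table.
table_a = {"A": 0, "B": 1}
table_b = {"B": 0, "C": 1, "D": 14}
```

After the exchange, A reaches C in 2 hops and D in exactly 15, the protocol's limit; anything further away is treated as unreachable, which is the trade-off behind RIP's small hop count.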
CiscoWorks is a web-based network management tool. It is used to monitor the state of the network and to configure all Cisco-based devices present in a local area network or wide area network, including hubs, switches, routers and servers. It houses a variety of applications best used for monitoring. Examples include CiscoView version 5.4, which is used for a single device and displays the Cisco environment from both the front and back end; colours are assigned, simplifying monitoring and configuration operations. Another application is WhatsUp Gold version 7.03, which uses a topology map and alerts users with an alarm system. The Threshold Manager assists in troubleshooting errors within the network and is used to set thresholds on devices using RMON. Finally, there is the show commands application, which provides the user with detailed and comprehensive material, including protocol information and IOS commands (Black 250).
Works Cited
Black, Uyless. IP routing protocols: RIP, OSPF, BGP, PNNI, and Cisco routing protocols. London: Prentice Hall PTR, 2000.