Among the key requirements for digital application development is the need to build a thorough and detailed information architecture. With it in place, the essential data management processes will be streamlined, ensuring that the application functions impeccably. A comprehensive and meticulous market analysis will also be conducted so that the needs of the target audiences can be identified (Shivakumar, 2018). Notably, because the app in question is intended for any kind of startup, its broad reach will require a particularly detailed market analysis that highlights current trends and reflects them in the application.
Creating a platform for mockups is another crucial stage in app development, which is why a portion of the project resources will have to be spent on buying the necessary software. In this way, a model that reflects the process of using the app in an authentic setting will be built. Hiring and outsourcing experts in app development, marketing, and financial management should also be deemed one of the main steps toward building a successful app (Shivakumar, 2018). Since the global digital market is filled with a range of cybersecurity threats, reinforcing the defense system within the app should be viewed as indispensable.
Finally, the development of the application will require the presence of the front-end staff, who will collect information about the key trends and attitudes within the selected market setting as well as forecast future alterations in the specified area (Shivakumar, 2018). Namely, the needs and tastes of the target audience will be researched thoroughly so that the framework for the application to be developed could reflect the exact stages of the startup creation.
Inventory Management
In the digital setting, the physical inventory used during project implementation is nonexistent. Therefore, technically, the project in question, which seeks to develop an app for startups, does not have inventory to manage in the way physical projects do. The absence of inventory to be coordinated and managed has several major positive implications. First and most obviously, the absence of physical inventory for the creation of the app will result in zero inventory-related costs. This characteristic of app development projects is one of the essential benefits that the digital context offers for project management (Shivakumar, 2018). Therefore, the lack of physical inventory management requirements should be seen as an opportunity to allocate project resources more effectively, since inventory management will not incur any expenses.
At the same time, it is worth noting that the development of an application will imply the active use of quite a number of digital resources. Specifically, the project will need the resources required to develop the necessary software, such as specific languages and markup technologies, preferably HTML5, CSS, or Java, as well as the tools for designing a user-friendly and functional interface. Additionally, human resources for testing the app, debugging it, and ensuring that emerging issues are fixed accordingly will be required (Shivakumar, 2018). Therefore, the software to be utilized as the main tool for building the app could count as the representation of the project inventory.
Capacity Planning
For this project to be implemented successfully, recruiting competent and knowledgeable employees will be required as one of the fundamental steps toward handling the key tasks and ensuring that the main objectives are implemented. Outsourcing should be considered the best strategy for this project since the support of the best experts in the field will be needed. Specifically, by using outsourcing, the managers of this project will create a robust environment for sharing experience and knowledge, which will help staff members to develop new skills and gain essential insights into innovative ways of managing key tasks (Shivakumar, 2018). In the app development project context, the specified perspective on capacity planning implies recruiting experts by outsourcing them from other companies and encouraging them to participate in interdisciplinary collaboration when developing the app.
The focus on interdisciplinary cooperation will minimize the probability of mistakes that could lead to bugs in the app and its resulting malfunctions. For this reason, a range of stages in the project management process overlap, as Fig. 1 below shows. Specifically, the Gantt chart demonstrates the necessity for the team to coordinate its actions at stages such as software development planning, system test planning, creating the software test environment, and running the tests. As the chart shows, the focus on promoting collaboration between the designer, the analyst, the strategist, the developers, and the quality assurance team at different points in the design of the app will be vital to ensure its proper functioning.
For the purposes of this project, it will also be reasonable to split the staff members into teams while maintaining cooperation among them. Thus, each team will focus on a specific task while receiving vital information about the current progress from the rest of the groups (Shivakumar, 2018). As a result, continuity and cohesion will be introduced into the environment of the project, allowing the participants to utilize their capacity to the maximum.
Process Flowchart
In order to illustrate the main steps of developing the application that will support entrepreneurs in their endeavors of building and running a business, a flowchart depicted below has been designed (see Fig. 2). As the flowchart demonstrates, the key steps of the app development will include the planning phase, the development of the user-friendly interface, the stage of testing and approving, the application of programming, and the quality assurance process (Shivakumar, 2018). However, while seemingly streamlined, the described set of milestones will also represent a series of intermittent interactions within the team so that each new action implemented could become the platform for developing another aspect of the app.
For instance, as the chart shows, customer requirements will be utilized at all of the first four stages of the app development in order to ensure that the project meets the realities of the present-day market and allows entrepreneurs to tailor the process to the specifics of their businesses. However, while planning and discussion constitute the very first stage and seemingly have a direct connection only to the next stage of interface design, their results serve as the framework for controlling the rest of the project, namely, the implementation of its key stages and the management of quality (Shivakumar, 2018). Similarly, the testing and development phase is inextricably linked to the stage of interface design since the choice of tests is predicated upon the decisions made at the design stage. Finally, quality assurance is implemented not only at the final stage of the project but also during testing.
Following the steps outlined above will guarantee the successful implementation of the project. Therefore, it is vital to introduce cooperation across the teams within the project and guide them toward consistent quality and corporate goals. As a result, the app will be designed to meet some of the most rigorous standards of the target market, which is why the project needs to abide by the proposed approach to resource allocation.
Reference
Shivakumar, S. K. (2018). Complete guide to digital project management. Apress.
Talkwalker was used to collect information on brand mentions in the media for the Virgin and Qantas airline companies. This quick search engine is a reliable tool for analyzing online data on trending content regarding a topic or brand. The data was collected from online content posted in the English language over three months, from January 15, 2022, to April 14, 2022. The data on positive, neutral, and negative sentiments in response to the mentions of both brands have been analyzed. The results indicate that Virgin obtained a 5.4% higher rate of positive sentiment, reaching the level of 14.4% in comparison to Qantas, which reached 9%. The spikes in positive sentiment were associated with praising feedback on the airlines' services and were consistent for both brands. Similarly, Virgin obtained a higher rate of neutral sentiment (66.4%) as opposed to Qantas (50%). As for negative sentiment, Virgin was characterized by a significantly lower rate of 19.4%, while Qantas received 41.1%. The spikes in negative sentiment were associated with Qantas' lawsuits and Virgin's customer feedback.
Potential Reach
As informed by the results of the collected data analysis, the two brands' potential reach is significantly different. Overall, the potential reach of the target audience indicates the scope of the population that engages with media information mentioning the brand. In particular, Virgin's indicator of potential reach was 162.5B, which is significantly lower in comparison to Qantas. Indeed, the potential reach for the Qantas Airlines company was 831.8B, implying a large target audience that might comprise the organization's consumer base. However, since the potential reach indicates the number of people who saw the content on the brand, the prevalence of negative sentiment for Qantas in comparison to Virgin implies that the potential reach difference might be adjusted accordingly.
The financial department within an organization or any given entity is one of the most delicate sectors that help ensure high levels of productivity within any for-profit or non-profit firm. Most firms have taken measures that help ensure high levels of security are maintained to avoid unnecessary losses. FinTech is one of the most productive innovations within the financial sector that have enabled the maintenance of high-level security in most financial institutions worldwide (Gomber et al., 2018). The emerging technologies have also helped in enhancing quality service delivery to the clients of various institutions.
FinTech innovators are among the world's leaders in developing highly effective technologies that have positively impacted financial sectors globally. Some of the most productive technologies that various financial organizations can adopt to enhance their productivity in the market include mobile banking, artificial intelligence, biometric technologies, big data, blockchains, and open banking APIs. These technologies have efficiently penetrated the market and industry due to the qualitative productivity they have brought to the operations of various firms in the financial sector, such as the banking industry (Gomber et al., 2018). The main aim of adopting emerging technologies in the banking sector is to ensure customer satisfaction.
Some of the main accelerators for the adoption of emerging financial technologies for security are the rapidly increasing numbers of clients depending on various institutions for satisfactory services. Most banks and other financial institutions have recorded increased numbers of customers whose wants cannot be fully satisfied by the available human resources, which leads most organizations to acquire various technologies suitable for customer satisfaction. One of the new technological innovations that have had a positive impact on the banking sector is blockchain.
A blockchain is an innovation that enables any financial organization or system to record large amounts of data in a way that makes it extremely difficult for an individual to hack into the system and change any information in the records. Technically, a blockchain can be considered an online ledger for firms and banks dealing with large volumes of confidential data. Blockchain technologies have also brought about cryptocurrencies, which have made transactions on online platforms safer and faster.
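To make the ledger idea concrete, the sketch below shows how records can be chained together by hashes so that altering an earlier entry invalidates every later one. This is a minimal illustration of the general principle only, not the design of any particular banking system; the record fields and the verify helper are hypothetical.

```python
import hashlib
import json
import time

def block_hash(block):
    # Hash the block's contents (which include the previous block's hash),
    # so changing any earlier record breaks the chain from that point on.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_record(chain, transaction):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev_hash": prev, "timestamp": time.time(), "transaction": transaction}
    block["hash"] = block_hash(block)  # computed before the hash field is added
    chain.append(block)

def verify(chain):
    # Recompute every hash and link; any tampered record is detected here.
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        expected_prev = chain[i - 1]["hash"] if i > 0 else "0" * 64
        if block["prev_hash"] != expected_prev or block["hash"] != block_hash(body):
            return False
    return True

ledger = []
append_record(ledger, {"from": "Alice", "to": "Bob", "amount": 100})
append_record(ledger, {"from": "Bob", "to": "Carol", "amount": 40})
print(verify(ledger))  # True; editing any earlier transaction would make this False
```

A real blockchain adds distributed consensus and digital signatures on top of this hash chaining, which is what prevents any single party from rewriting the ledger.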
Blockchain technologies have helped in maintaining trustworthy relationships between firms and their clients. Most financial institutions globally have embraced internet banking, which enables their clients to perform different tasks online. Blockchain enhances the security of transactions that are conducted on online platforms. Blockchain has also greatly helped in closing the loopholes that exposed financial institutions to risky situations.
The major element required for any financial institution in the banking sector to successfully adopt blockchain technology is providing proper education to the employees and customers about the innovation. Proper education on blockchain technology will enable both financial institutions and their customers to make productive choices during their financial operations (Paul & Sadath, 2021). Some of the variables required to ensure that all the emerging technologies are fruitful for an organization are the robustness of the technology, an understanding of how the new technology operates, and fast internet connectivity. The other necessary element for introducing the innovation is revising the firm's equipment and the tools necessary to make the system function properly.
Some of the probable changes that will come up as a result of adopting blockchain and other various technologies within the banking sector are reduced over-dependence on human resources, reduced financial costs on security and maintenance, faster speed of transactions, and better personal data control. Most operations will be done through automated machines and applications, which are faster and safer as compared to human resources (Paul & Sadath, 2021). Implementing blockchain is an innovative action leading to the rapid development of the company, its reliability, and safe data usage.
References
Gomber, P., Kauffman, R. J., Parker, C., & Weber, B. W. (2018). On the Fintech revolution: Interpreting the forces of innovation, disruption, and transformation in financial services. Journal of Management Information Systems, 35(1), 220-265. Web.
Paul, L. R., & Sadath, L. (2021). A systematic analysis on FinTech and its applications. 2021 International Conference on Innovative Practices in Technology and Management (ICIPTM). Web.
The ICT Division 4 represents an important population that has to acquire a better understanding of the nature of technology in order to make the best use of it after graduation. This means that the key strengths and weaknesses of digital instruments should be identified and analyzed based on real-world evidence rather than simulations, offering students an opportunity to try resolving an actual issue that they face on a daily basis. From scientific to mathematical problems, students belonging to Division 4 will have the opportunity to demonstrate their knowledge when utilizing appropriate technology to perform necessary operations. The formal classroom setting should not delimit students in any way that affects their chances to understand how new communication systems function, enhance society, and create more room for personalization. This is a creative approach to preserving the economy and ensuring that young specialists possess all the necessary digital competencies to contribute to an improved cash flow.
When working with Division 4, teachers should expect students to grasp the key principles of e-commerce and inform themselves on the topics of marketing, digital security, and online privacy. These areas of ICT are directly linked to how the government establishes contact with individual specialists and groups of employees, since the modern IT sphere forces workers to become as flexible as possible. Existing social media communications require students to analyze information quickly and access only reliable sources. Therefore, teachers are required to educate Division 4 on how morals and ethics shape the use of technology, which can represent both a threat and an upside to humanity. Education should be all-inclusive enough to protect the integrity of information attained by students while helping them gain access to the means of preserving the integrity of their own data. Overall, ICT will be outlined as an influential technology that is going to multiply its impact on humans during the current decade.
The use of primary sources to research contemporary security issues is essential because the sources offer accurate and timely information. However, although the use of primary sources is highly beneficial, they also have limitations. Some of the current security issues include terrorism, war and conflict, drone violence, cyber threats, intelligence and surveillance of states, and nuclear weapons. Researchers of such issues could employ primary sources including first-hand articles and books, letters, speeches, personal narratives, interviews, audio and video recordings, artifacts, and photographs, among others. This paper discusses such limitations, including high costs, time consumption, limited applicability, and unattainability, as well as methods of overcoming them. Primary sources have several drawbacks, but these can be addressed by improving the process of data collection.
Primary Sources are Expensive
The creation and collection of the data that make up primary sources can be expensive. Investigating security issues requires thorough research and the use of fully detailed information (Data Collection Challenges and Improvements 2021). Accessing useful information requires a high investment that could be out of the researcher's budget. Contemporary security issues also require urgent investigation due to the advanced effects of security threats. Therefore, researchers need to use the fastest possible means to collect the necessary data, and that could be costly.
Depending on the nature of the security issue under research, various methods of collecting primary data are expensive. Investigating an ongoing threat such as cyber-attacks or terror attacks requires the application of all methods of data collection that may yield positive results (Pandey et al. 2021). Such investigations often require an increased research budget due to the increased cost of information. For example, counter-terrorism officers might need to buy first-hand information about an ongoing attack obtained through personal narratives.
Conducting written research about sensitive issues could also require high budgets. Writing books and articles regarding current security issues involves incurring several types of costs, including the collection of information and publishing costs. Researchers using already written primary materials may have to purchase them at a higher cost (Pandey et al. 2021). Training experts for data collection, especially in terrorism and cyber-attack units, is costly. The person must have adequate knowledge of which information is useful for the investigation and which is not. Data-collecting tools, especially those used in cyber-attack systems, are expensive to acquire and to learn to use.
Economic and technological restrictions can also make the process of primary data collection expensive. Some security issues may demand updating the infrastructure and technological systems for collecting data (Pandey et al. 2021). Such demands can be driven by the digitization of paper-based records and of the tools that maintain the data collection process. Some institutions, no matter how willing, may not have the economic capacity to update the systems. Many security-related systems are also sold by the government, and the same government is responsible for updating them. Therefore, accessing the already updated digital systems from the government is costly.
Collecting primary data using digital systems is expensive because of recent information technology restrictions. Today, due to the increased level of cyber threats, the governments of various countries have revised the regulation of IT tools (Pandey et al. 2021). More restrictive rules are applied to prevent black hat hackers from accessing IT tools. Cyber security tools have therefore risen in price, making it hard for researchers to access them. Consequently, collecting data using cyber security tools becomes costly and can prevent further research. Furthermore, government-updated systems may lack the support to include research questions and other sophisticated data collection processes.
Primary Sources are Time Consuming
The collection of primary source data requires more time to ensure the right information is obtained. According to Pandey et al. (2021), most primary sources are accurate because their creation takes time. Collecting the right and accurate data from unknown and known sources may require more time. Researching from scratch, especially for evidence-based articles, could be time-consuming. Evidence-based data collection involves finding participants, setting the required questions, and the actual process of data collection. Furthermore, quantitative data collection may require a longer time compared to the qualitative method.
In most sensitive cases where investigators have to go undercover as spies, more time is needed to collect substantive information. Such cases apply to security issues like terrorism, nuclear weapons, and drone violence (Pandey et al. 2021). Investigating correspondent data requires secrecy and patience, thus taking longer periods. In ongoing security threats, there might be no time to allow such long investigations, so primary sources could not be useful. Relying on research that takes long periods in life-threatening cases can lead to lost lives. Complex issues that require the use of more than one method to collect data could also be time-consuming.
Lack of training in methods of primary data collection can also demand more time during research. Untrained researchers may take longer despite having all the necessary tools to collect data (Pandey et al. 2021). Furthermore, if the investigators do not know the purpose or importance of the collected information, they may focus less or fail to ask critical questions. For example, in interviews, an interviewer who lacks knowledge about the importance of the topic is less likely to ask the most relevant questions. A lack of training in the how and why of the topic under investigation could also put the source in danger. Trained personnel investigating sensitive security issues know not to ask for the information publicly, as such actions could put the source at risk. Therefore, more time may be spent when using untrained people to collect data.
Use of Primary Sources could have Limits
Primary data is often limited to specific places, times, and purposes, hence its use is limited. Compared to secondary sources, primary sources cover specific issues in places and times that may not be applicable in different situations (Pandey et al. 2021). Consequently, primary sources can only be used for specific purposes, thus reducing the chances of being applied to many cases. This means that data collected in one place at one time might be irrelevant in the same situation at a different place and time.
When collecting the most sensitive data regarding national security, the available primary sources may not be useful. For example, when digging into national security issues, the available data may be restricted to authorized users. Therefore, if the researcher has no permission to access the information, the government might only offer less detailed data (Schuurman 2020). Less detailed data, even though primary, may be rendered useless, especially in urgent situations. Some sources, especially personal narratives, can also have time limits on access and use of the information. For example, witnesses whose lives are threatened might have only a short time in which to provide useful information to a case before they need to go into hiding or are eliminated. Consequently, the elapsing of such periods due to the untimely death of the narrator renders that source useless.
Primary Sources are not Always Attainable
Some research projects may be too long and involved to be conducted by an institution. A security issue may require the use of primary data, but such data could be beyond the institution's reach. For example, the only source of credible information may be conducting an evidence-based study that involves materials absent from the organization (Schuurman 2020). The collection of some primary data could also be risky for the researchers. Therefore, no matter how critical using such material is, the researchers are forced to opt for other sources.
Primary sources whose cost of collection outweighs their purpose may be deemed unattainable. If obtaining original data puts the research team in more danger, then the team can choose to use other sources. When handling security issues, the priority of the research team is to use materials that reduce the threat (Schuurman 2020). Therefore, if using primary sources poses more danger to the country, then the materials are unattainable. Some research, especially historical research, may only require primary sources such as artifacts that may not be available. The unavailability of certain critical sources, which might exist only in primary form, also renders the materials unattainable.
Primary Sources Lack Quality Assurance
Although primary sources are timely and accurate, the quality of the data collection processes is not guaranteed. When conducting time-limited research, the data collected may not be verified. The use of unverified data can lead to losses and a worsening of security in contemporary security issues (Data Collection Challenges and Improvements 2021). Studies conducted in a short time can also contain errors and mistakes in the data collection processes. Due to the limitation of time, the data is also not cross-checked, hence its accuracy and quality are not guaranteed. According to research, most sophisticated issues such as security require the use of quality data-collecting processes. Such processes improve the quality of information, but the data still needs verification.
The quality of the data collected is also dependent on the person collecting and recording it. People recording collected data may enter it incorrectly into paper or digital records (Bowie 2021). The research study may also be completed in a short time, leaving no opportunity to review the completeness of the recorded data. For example, personal narrative information obtained a few minutes before use may not be reviewed or verified. Consequently, the completeness or quality of the data used is not guaranteed, and neither is the outcome of the information.
Improving the Challenges of Primary Sources for Conducting Research on Contemporary Security Topics
The most effective way to improve primary sources is by improving the processes of collecting data. The improvements can be implemented at the level of the institution, the infrastructure, and the practices of data collection. Improvement of the quality of data, training, and reporting purposes is critical (Data Collection Challenges and Improvements 2021). Improving data collection practices requires commitment from the research team and the willingness to improvise, compromise, and increase the research budget. The practice requires identifying the challenges and barriers to improvement in the implementation process. The research institution should also be willing to adopt the best data collection practices for improvement to take place.
Training
Training the people investigating a security threat is critical to collecting the right information. The training process should emphasize the importance of data and the benefits obtained from collecting it (Bowie 2021). That way, the researchers will ask the right questions and collect relevant information. The training of researchers should also focus on the planning, collection, and evaluation of data to ensure that the collected data is of high quality. Trained personnel will also have the confidence to ask security-sensitive questions that could be avoided by less confident people.
Adequately training investigating staff reduces the time consumed in data collection. Knowledgeable personnel will know where to go and where to look to find the necessary information. They will therefore not spend valuable time roaming around seeking information from the wrong places (Schuurman 2020). Proper training can also help the research team cut down the costs of the data collection process. When trained people go searching for information, they will pay only for the extraction of information and not for leads. In other words, costs incurred in seeking information from the wrong places will be cut down.
Monitoring the Outcome of the Information from Primary Sources
The application of data-related key performance indicators can help to monitor implemented improvements. Using the performance indicators could show the level of precision and completeness of the data before its application (Schuurman 2020). This is important for discarding invalid and valueless data that would negatively affect the outcome. Organizations should set key performance indicators that allow quality improvements and should not apply incentives that could compromise the quality of the data collected. The use of such indicators can also reveal data that was recorded incorrectly during entry.
Providing Audit and Review Stages of the Collected Data
Evaluation and audit of collected data can only apply to less urgent investigations. Audits and reviews are regularly recommended to ensure the completeness and accuracy of information (Schuurman 2020). Solving security issues requires whole and accurate data because incomplete information can give unwanted results. Therefore, auditing the collected data can reduce the chances of getting unexpected results. For example, after conducting evidence-based research, collected data should be reviewed and audited before use. Review and audit can also cut down the cost of reversing unwanted outcomes by providing the exact needed results.
Data collected for urgent use may require the use of more reliable primary sources. Contemporary security issues are more life-threatening and thus require urgent action. Fetching data for urgent use may not accommodate audit and review (Schuurman 2020). Therefore, the use of the most reliable sources closest to the issue is critical. For example, during a cyber-attack, instead of interviewing the employees, the investigating team could view the CCTV footage for quicker information. The information obtained here is accurate and does not require a further audit.
Use of Prior-Established Data Collection Practices
Investing in a data collection process only once there is a need for evidence could be expensive. New demands affect the urgency of information and the cost of data collection. Therefore, preparing for the possible need for data and investing in such practices in advance could reduce the cost when the need for information is prompt (Data Collection Challenges and Improvements 2021). For example, counterterrorism institutions tend to send assets to areas with a high risk of terror attacks. When attacks happen in those regions, the institution deploys already collected data to solve the attack. Such pre-emptive methods of data collection help in cutting down the cost of an investigation.
Setting a Preliminary Budget
Obtaining urgent primary information can be costly and time-consuming, especially if it was not planned for. Security agencies should be ready for moments when a breach of security demands immediate action (Bowie 2021). The government should therefore set aside a data collection budget specifically for security-threatening issues. Whether a security issue is an emergency or not, it will be expensive if it was not planned for. When conducting evidence-based research, it is important to consider involving voluntary participants rather than compensated ones. Although voluntary information may at times be inaccurate, the use of voluntary participants cuts down study costs.
Following Policy Protocols to Obtain Data
Although following the set rules and regulations in data collection could be tedious, it is the gateway to obtaining limited and restricted data. Some policies dictate particular methods of data collection, especially those concerning national security (Bowie 2021). Following these policies might be time-consuming but guarantees accurate and reliable outcomes. For example, to obtain the call logs of a suspect, one is required to obtain legal permission from a court. Following such a pursuit may be tiresome and time-consuming, but it is the only way to gain access to information with limited access.
Accessing cyber-threat data collection tools also requires the researcher to follow policy rules. Obtaining primary sources with restricted access demands patience and confidence in the pursuit (Bowie 2021). Being granted access to some top security data is a process that could take months or years, but if the data is worth the wait, then the rules should be followed. It is important to note that this concept does not apply to data needed to solve urgent matters.
Making Follow-ups to Obtain More Data from a Source
Some situations do not allow the collection of comprehensive information; hence, follow-up should be done. Missing data that could not be revealed during the process of research should be followed up on (Bowie 2021). For example, in an interview about compromising security issues, the interviewee cannot reveal all the details. The interviewer should therefore make it a duty to follow up with the interviewee for more information. In emergency cases, especially for organizations that respond to security emergencies, urgent information can be obtained first, and later follow-ups are done to reach further information. The follow-up could be done face to face or through a phone call intended to find more information. Such follow-ups bridge the gaps created by unattainable and time-limited types of sources.
General Security and Privacy Considerations
The public and private sectors have outlined guidelines to follow when collecting data. In pursuit of primary data, the researching organizations need to understand and implement the security and privacy considerations guiding their operations (Data Collection Challenges and Improvements 2021). The set rules and regulations are what determine the reliability of collected data and whether the conclusion reached is practical or not. Consequently, many researchers have failed to have their articles approved for public publication because they failed to meet the legal requirements.
The researchers should understand the binding authority upon government institutions to legally abide by requests to offer information that concerns individual security issues. These institutions are therefore expected to remove all information barriers to necessary security information (Data Collection Challenges and Improvements 2021). The organizations should then provide all the needed support, including written and oral reports, to the investigating teams. Such policies exist to prevent the unnecessary denial of primary sources of information that could be used to solve critical cases.
It is the responsibility of the researcher to know which institutions allow the release of restricted information in order to reduce the number of unattainable sources. It is also important to know which institutions have the right to offer the needed security data (Data Collection Challenges and Improvements 2021). This knowledge helps in improving the process of data collection by reducing the number of unattainable sources. Through such regulations and guidelines, it is possible to attain personal narratives from powerful personnel who might not be willing to cooperate freely.
Government institutions also help researchers to complete links with missing data. In the process of collecting data, some missing links could be found in sources whose whereabouts are known only to the government (Data Collection Challenges and Improvements 2021). Such information could be withheld for security or privacy reasons, and one can only find it through the help of the government. For example, original materials such as artifacts and letters regarding security threats could be stored by the government, or the government could be the only means of knowing where they are. Therefore, knowing how to follow such protocols is critical to filling in the missing links to the sources.
Conclusion
The use of primary sources has several drawbacks, but these can be addressed by improving the process of data collection. The application of primary sources to research contemporary security issues is suitable due to the originality of the data. However, they can be expensive, time-consuming, and of limited access, among other challenges. The challenges in the use of primary materials in researching sensitive security issues such as cyber-attacks, terrorism, nuclear weapons, and drone violence can make useful information hard to acquire. Challenges in acquiring time-sensitive information could devastate the researchers and slow down the speed of resolving the issue under concern. Therefore, improving the process of data collection is an effective way to reduce such challenges. Training, monitoring the performance of collected data, and following existing policies are among the many ways to improve the data collection process. This topic has received limited study; hence, more research needs to be done. Further study will offer readers a broader view of the challenges and their solutions.
References
Bowie, Neil G. "40 Terrorism Databases and Data Sets." Perspectives on Terrorism 15, no. 2 (2021): 147-161. Web.
"Data Collection Challenges and Improvements." Victorian Government, 2021. Web.
Pandey, Prabhat, and Meenu Mishra Pandey. Research Methodology: Tools and Techniques. 2021. Web.
Schuurman, Bart. "Research on Terrorism, 2007-2016: A Review of Data, Methods, and Authorship." Terrorism and Political Violence 32, no. 5 (2020): 1011-1026. Web.
Kinja is software that operates as a news aggregator and provides an opportunity to find numerous reports targeted at the general public and presented in such forms as blogs and articles from online newspapers. Any person has an opportunity to retell information that is interesting to him or her and share it with others. Thus, it can be claimed that the reports found on this website can be used as a tertiary source.
My attention was attracted by an article presented by Djublonskopf (2014) under the name "Teeth reveal that Canadian dinosaurs knew how to share." The author states that dinosaur teeth were examined by numerous professionals for a long time, but one of the recent studies differs from its predecessors, as it used many teeth from many specimens. Having the teeth of 76 individual dinosaurs, the scientists received an opportunity to consider multiple ecosystems (Mallon & Anderson, 2014). It was concluded that a lot of plant eaters existed in the same period within one particular territory.
Even though they included two different ankylosaurs, two different ceratopsians, and two different hadrosaurs, the megaherbivores of ancient Alberta had no problems interacting and got along with each other easily, as they ate various types of plants that could be reached at different heights.
The scientists found out that ankylosaurs preferred soft plants and fruit. They had one of the most seasonally varied diets, which was evidenced by the pitting left by tiny hard seeds. Ceratopsians preferred tough food, such as thick leaves and twigs that could be found lower than a meter off the ground. Hadrosaurs ate everything, including low-growing soft plants, sprouts, fruit, and twigs, which they took from the highest locations.
Even though these findings are not revolutionary, they reveal the information that was not discovered earlier.
Reference
Djublonskopf. (2014). Teeth reveal that Canadian dinosaurs knew how to share. Web.
Mallon, J., & Anderson, J. (2014). The functional and palaeoecological implications of tooth morphology and wear for the megaherbivorous dinosaurs from the Dinosaur Park Formation (Upper Campanian) of Alberta, Canada. PLoS One, 9(6), e98605.
The relational database simplifies storage, manipulation, and retrieval of information from diverse sets of tables. The relational database comprises tables, forms, reports, and queries. This report explains the creation and the use of the relational database in tracking books, customers, and shipments of BookTown, a book retail store.
Database Creation
To create tables, the Excel files for Books, Customers, and Shipments were imported into Access using the Import Wizard.
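For readers who prefer a scripted route, the snippet below sketches an equivalent import step in Python rather than the Access Import Wizard used in the report; the database name booktown.db and the spreadsheet file names are assumptions for illustration.

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect("booktown.db")  # stand-in for the Access database
for table in ("Books", "Customers", "Shipments"):
    # One spreadsheet per table is assumed; the report itself used the Import Wizard.
    pd.read_excel(f"{table}.xlsx").to_sql(table, conn, if_exists="replace", index=False)
conn.close()
```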
Entity-Relationship Diagram
The snapshot below displays the entity-relationship diagram, created by dragging each primary key onto the corresponding foreign key.
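A rough SQL equivalent of those relationships is sketched below; the key column names (Book_id, Customer_id, Shipment_id) are assumptions based on the fields mentioned later in the report, not the exact schema of the Access file.

```python
import sqlite3

conn = sqlite3.connect("booktown.db")
conn.executescript("""
PRAGMA foreign_keys = ON;              -- enforce referential integrity
CREATE TABLE IF NOT EXISTS Shipments (
    Shipment_id INTEGER PRIMARY KEY,
    Customer_id INTEGER NOT NULL REFERENCES Customers(Customer_id),  -- one customer, many shipments
    Book_id     INTEGER NOT NULL REFERENCES Books(Book_id),          -- one book, many shipments
    Ship_date   TEXT
);
""")
conn.close()
```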
Query Construction
The queries were constructed using the Simple Query Wizard, which is located under Query Wizard on the Create tab.
Query 1- Stock levels
The query was created by selecting Book_title, Edition, and Current_Stock in the table of Books.
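In SQL terms, this query amounts to a simple projection of three columns; the sketch below assumes the same field names and a SQLite copy of the database rather than the original Access file.

```python
import sqlite3

conn = sqlite3.connect("booktown.db")
stock_levels = conn.execute(
    "SELECT Book_title, Edition, Current_Stock FROM Books"
).fetchall()
```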
Query 2-Orders by Customer
In the Simple Query Wizard, the Customers table was selected, and the fields for first name, last name, shipment ID, and retail price were chosen.
In the Design view, the Total row was added, Count was selected for the shipments field, and Sum was selected for the retail price field, as shown below.
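The wizard steps above correspond roughly to a grouped query with one Count and one Sum aggregate. In the sketch below, the retail price is assumed to be reached through each shipment's book, and the join column names are assumptions.

```python
import sqlite3

conn = sqlite3.connect("booktown.db")
orders_by_customer = conn.execute("""
    SELECT c.First_Name,
           c.Last_Name,
           COUNT(s.Shipment_id) AS Order_Count,   -- 'Count' on the shipment field
           SUM(b.Retail_Price)  AS Total_Value    -- 'Sum' on the retail price field
    FROM Customers AS c
    JOIN Shipments AS s ON s.Customer_id = c.Customer_id
    JOIN Books     AS b ON b.Book_id = s.Book_id
    GROUP BY c.Customer_id
""").fetchall()
```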
Query 3-Books Shipped with Total
The query was created by selecting Book_title, ISBN, Retail Price, and Book_id, the latter serving as the link between the Shipments and Books tables.
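A possible SQL reading of this query is shown below; treating the "total" as a per-book shipment count is an assumption, as is the Book_id join column.

```python
import sqlite3

conn = sqlite3.connect("booktown.db")
books_shipped = conn.execute("""
    SELECT b.Book_title,
           b.ISBN,
           b.Retail_Price,
           COUNT(s.Shipment_id) AS Times_Shipped  -- total shipments per book
    FROM Books AS b
    JOIN Shipments AS s ON s.Book_id = b.Book_id  -- Book_id links the two tables
    GROUP BY b.Book_id
""").fetchall()
```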
Query 4-Books Published Before 1990
Using the Simple Query Wizard, the Book_title, Edition, and publication date fields were selected.
In the Design view, <#01-Jan-90# was typed in the Criteria row under the publication date field, as indicated below.
The outcome of the query of books published before 1990 is shown below.
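The Access criterion <#01-Jan-90# translates to a simple date comparison in SQL; the column name Publication_Date and the ISO date format below are assumptions.

```python
import sqlite3

conn = sqlite3.connect("booktown.db")
published_before_1990 = conn.execute(
    "SELECT Book_title, Edition, Publication_Date "
    "FROM Books WHERE Publication_Date < '1990-01-01'"  # mirrors the <#01-Jan-90# criterion
).fetchall()
```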
Query 5-Customers by Book
In the Design view of the query, [Enter Book Title] was entered in the Criteria row of the Book_title field, as depicted below, and the Unique Values property was set to Yes in the property sheet.
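The parameter prompt and the Unique Values setting map naturally onto a parameterized SELECT DISTINCT; the join columns below are, again, assumptions.

```python
import sqlite3

conn = sqlite3.connect("booktown.db")
title = input("Enter Book Title: ")  # plays the role of the [Enter Book Title] prompt
customers_by_book = conn.execute("""
    SELECT DISTINCT c.First_Name, c.Last_Name      -- DISTINCT mirrors 'Unique Values = Yes'
    FROM Customers AS c
    JOIN Shipments AS s ON s.Customer_id = c.Customer_id
    JOIN Books     AS b ON b.Book_id = s.Book_id
    WHERE b.Book_title = ?
""", (title,)).fetchall()
```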
Creation of Forms
Using the Form Wizard, all fields of the Books, Customers, and Shipments tables were selected in the respective steps.
Colored backgrounds were used to enhance the appeal of the forms.
Books Form
Customers Form
Shipments Form
Creation of Reports
Reports for Books, Customers, Shipments, and Stock Levels were created using the Report Wizard by selecting the appropriate fields.
Book report
Stock Levels Report
Conclusion
Tables, forms, reports, and queries are different parts of a relational database. They are very important in database management because they ease the storage, manipulation, and retrieval of data. The use of referential integrity in entity relationships avoids redundant data entry, thus improving the accuracy and integrity of the data.
When someone lurks in an online community, it means that they either just scroll or post so infrequently that it is hard to call it participation. There are various reasons for lurking, one of which is feeling the need to get acquainted with the basic rules and ways of the community before posting. Another reason is being able to view the content without actually getting much attention (Bateman, Gray, & Butler, 2011). I am the living representation of yet another reason for lurking online: to learn more about online culture and contribute to the research of it.
I decided to join the WonderHowTo community because the website description interested me. This site is just the one to meet the demand for learning because it is a giant database of people's experiences. The contributors share their ways of doing something with others, providing tutorial videos and step-by-step pictures. The contributions are viewed and discussed. The users have a chance to thank the contributors by giving kudos, which is the analog of liking.
The topics shared and discussed span a wide array, ranging from oriental cooking to electric guitar tuning to smartphone hacks, with users suggesting their how-tos and getting comments (WonderHowTo, 2016). Newcomers are welcomed easily since the source is open to anyone, and one can register with their Facebook account, which is what I did. As far as I can tell, there are quite a few lurkers who are probably getting used to the rules before posting or just scrolling and reading.
The contributors are not allowed to post if their posts can cause harm or are offensive. Also, any issues with illegally used work can be sorted out with the copyright agents. As to the norms, I have never seen a single sexist, racist, political, or otherwise offensive post at WonderHowTo. When discussions occur, they are usually related to the topic, which can be a brand (like iOS or Android gadgets or Pop-Tarts) or a particular technique of doing something (like tips on hairstyling or exam cheating). Sometimes the users get personal and express their praise or comment on another contributor's lack of knowledge on the subject. If a comment uses offensive language, it gets deleted.
The influencers at WonderHowTo are persons whose posts are either very useful for hacking everyday life or are presented uniquely. For instance, the comic artist Yumi Sakugawa has posted every single lifehack as drawn comics, thus earning the users' admiration. The hacks related to iOS/Android gadgets, cooking, health, and style also receive the most kudos. The contributors probably do not get paid, but they gain popularity within the community and outside it when other users share their posts. Thus, they try to impress their audience with the usefulness of their tips, their writing skills, and the quality of their photo and video materials.
As a lurker, I came to know WonderHowTo as a useful source. As a participant, I got a chance to experience its friendly atmosphere. My questions on making donuts and on making an invisible folder on my smartphone home screen were answered respectfully within an hour or two. The answers came from contributors who thanked me for my feedback. Thus, the WonderHowTo community is a good example of a Web culture facility made by people and for people.
References
Bateman, P., Gray, P., & Butler, B. (2011). The Impact of Community Commitment on Participation in Online Communities. Information Systems Research, 22(4), 841-854.
WonderHowTo: Fresh Hacks for a Changing World. (2016). Web.
Nanotechnology is perhaps one of the fastest-growing sectors in the technological field. According to Berger (2010), it involves the study and manipulation of matter on an atomic and molecular scale, which is very important in manufacturing and other industries. This technology also involves the development of materials and structures with at least one dimension measuring between 1 and 100 nanometres (Berger 2010). The dimension, in this case, may be the thickness of the device or structure being developed or its length.
Many scholarly articles have been written on the future of nanotechnology. The main interest of these articles and scholarly works is the effects, both negative and positive, that nanotechnology may have on humans. However, it is a fact beyond doubt that this technology can be used to come up with many new devices and materials that can be used in various fields of society. Examples include the application of nanotechnology in agriculture, medicine, chemistry, and electronics, among others.
Despite its potential in various fields, nanotechnology has raised many concerns regarding its negative effects on humans and the environment. For example, according to Chourasia and Chopra (2010), concerns have been raised on the toxicity and other impacts that this technology may have on the environment. For example, what effects does the release of nanomaterials have on the environment? How will this technology affect the world economy? These are just some of the questions raised by the increased application of nanotechnology.
Most of these concerns are associated with the fact that nanotechnology is a fairly recent development in the technological world. As such, there are many aspects of the technology that are unknown, including the negative effects that it may have on humans. To address these concerns, government agencies and other stakeholders have realized the importance of regulating nanotechnology. As a result, activities involving this technology (such as experiments and production of nanomaterials) have to be sanctioned by the governments and other relevant authorities.
The Rise of Micro- and Nano- Engineering Industry
As already indicated in this paper, nanotechnology has been applied in many fields and industries around the world. Engineering is one field that has embraced nanotechnology.
Messina, Rivera, Olguin, and Ruiz (2002) regard nanoengineering as a form of engineering that is carried out on the nanoscale. The nanometre, after which this field is named, is a unit of measurement that is equal to a billionth part of a meter. Generally, nanotechnology is synonymous with pure sciences such as biology and chemistry. However, when applied in the field of engineering, the emphasis is more on the engineering aspect of this technology than on its pure science attributes (Messina et al 2002).
The rise of the nanoengineering industry has been associated with the increasing demand for devices with high precision. One example is the demand for spaceship skins that can resist the effects of overheating as the spaceship enters the atmosphere. Others include the demand for surgical devices with increased precision to enhance surgical procedures.
Demand for Advanced Analytical Techniques in Nanotechnology: A Case Study of Nanoengineering
As the micro- and nano-engineering industry grows by leaps and bounds, there is an accompanying rise in demand for more advanced analytical and characterization techniques for the materials and systems that are used and produced in the process. This is especially so because nano-products are significantly different from their larger-geometry counterparts, and as such require equally different techniques to analyze and handle them (Evans Analytical Group [EAG] 2007).
For instance, a nano-product that has a high surface-area-to-volume ratio has increased sensitivity during production and storage. The sensitivity of such a product to impurities and micro-contamination during processing means that the yield of the whole production process is affected (EAG 2007). Micro-contamination increases defects, and it is these defects that lower the production yield.
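As a rough illustration of why surface effects dominate at this scale (the particle sizes below are chosen for illustration, not taken from the cited source), a sphere's surface-area-to-volume ratio scales as the inverse of its radius:

```latex
\frac{S}{V} = \frac{4\pi r^{2}}{\tfrac{4}{3}\pi r^{3}} = \frac{3}{r},
\qquad
\frac{(S/V)_{r = 50\,\mathrm{nm}}}{(S/V)_{r = 0.5\,\mathrm{mm}}}
  = \frac{0.5 \times 10^{-3}\,\mathrm{m}}{50 \times 10^{-9}\,\mathrm{m}} = 10^{4}
```

With ten thousand times more surface per unit volume, even trace surface contamination represents a far larger fraction of the particle's material, which is why processing defects translate so directly into yield loss.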
Take the example of a spaceship outer skin that has been micro-contaminated during production. This contamination might lead to a rough surface, which increases the friction between the ship's skin and the atmosphere during re-entry. This will in effect reduce the efficacy of the whole process of space exploration, as it might damage the ship.
According to EAG (2007), advanced analytical techniques have for the longest time addressed issues such as the cleanliness of products, reliability of devices and contamination of the final product, and the final output of the production process among others. These are the same issues that affect nanoengineering today, and as such, advanced analytical techniques are needed to address them.
Advanced analytical techniques can be used on various aspects of nanoengineering such as the product itself and the production process. This means that the techniques can be used on the tools used during nanoengineering, or they can be used to analyze the final product of the process.
This report is a demonstration of the usage of advanced surface analytical techniques in the process of manufacture and reliability characterization of new nanotechnology materials. This author investigated nanoengineering, a scientific field, and developed experimental protocols using two advanced analytical techniques. This report will describe planning, data acquisition, interpretation, and modeling.
Two advanced analytical techniques were used in this investigation. These are Scanning Electron Microscopy (herein referred to as SEM) and Auger Electron Spectroscopy (herein referred to as AES). This report will provide demonstrations of these two advanced analytical techniques as characterization devices in nano-dimension in the field of nanoengineering. The strengths and weaknesses of these two techniques will be provided, as well as how these strengths and weaknesses can be exploited. The report will also include an analysis of the complementarity of the two advanced analytical techniques, and how this complementarity can be exploited.
Literature Review
Nano-Dimension Materials Characterisation
According to Goldstein (2003), nano-dimension materials characterization is critical in many manufacturing industries that use nanoengineering. This includes the production of semiconductors for the electronics, automotive, and aerospace industries. Many manufacturing industries are today tending towards smaller and lighter characteristics, creating the need for nano-dimension materials characterization.
Characterization of nanomaterials is important in improving product yield and functional reliability in nanoengineering (EAG 2007). It is one way of increasing the volume of production while reducing wastage. It also increases the confidence of consumers in the product, hence increasing demand.
Advanced Analytical Techniques in Nanoengineering
According to Grant and Briggs (2003), analytical techniques in nanoengineering should be consistent with the surface, volume, or region that is to be analyzed. This is given that nanoengineering involves surfaces and volumes that are small. This means that analysis of the low-density distribution of nano-particles should be carried out using an analytical tool with a corresponding analytical area.
Several analytical techniques can be used on surfaces of nanomaterials in nanoengineering. They include the following:
Transmission Electron Microscopy (also known as TEM)
Time of Flight Secondary Ion Mass Spectrometry (also known as TOF- SIMS)
X-ray Diffraction (XRD) among others
Following is a brief analysis of each of these analytical techniques:
Transmission Electron Microscopy (TEM)
This form of microscopy involves the transmission of a beam of electrons through a very thin specimen (Gondran, Charlene, and Kisik 2006). The electrons so transmitted interact with the specimen as they go through it. This process results in the formation of an image that is then magnified and focused on an imaging device such as a film (Gondran et al 2006).
It is noted that the transmission electron microscope is preferred over a normal light microscope in nanoengineering. This is given the fact that it gives an image with a much higher resolution due to the small de Broglie wavelength of the electrons used (Chourasia and Chopra 2010). This being the case, the scientist can examine minute details of the surface, such as a single column of atoms.
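A back-of-the-envelope, non-relativistic estimate makes this concrete (the 10 kV accelerating voltage is an illustrative assumption, not a value from the cited sources):

```latex
\lambda = \frac{h}{\sqrt{2 m_{e} e V}}
        \approx \frac{1.23\,\mathrm{nm}}{\sqrt{V\;[\mathrm{volts}]}},
\qquad
V = 10\,\mathrm{kV} \;\Rightarrow\; \lambda \approx 0.012\,\mathrm{nm}
```

This is several orders of magnitude shorter than visible-light wavelengths and below typical interatomic spacings, which is what makes imaging at the level of atomic columns feasible.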
In an experiment cited in EAG (2007), a group of visibly spherical nanoparticles was examined using TEM. After sonication of the particles in methanol and distribution on a transmission electron microscopy sample grid, it was observed that most of the particles were not spherical. A multitude of shapes that were not discernible under a light microscope was observed. The results of this experiment go a long way in showing the significance of TEM in nanoengineering.
Time of Flight Secondary Ion Mass Spectrometry
This is another advanced analytical technique that is used in nanoengineering. According to Adams, Vaeck, and Barrett (2005), this method involves the use of a pulsed ion beam which peels off molecules from the outermost surface of the specimen. The nano-particles are peeled off from atomic monolayers on the surface of the specimen. They are referred to as secondary ions, giving this technique its name.
The particles so obtained are then accelerated through what Swapp (2011) refers to as a flight tube. Their mass is then determined by gauging the time of flight, which is the amount of time taken by the particles to reach the detector.
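The underlying relation is standard: ions of charge state z accelerated through a potential V all gain the same energy per unit charge, so over a flight path of length L the arrival time encodes the mass-to-charge ratio (the symbols below are generic, not parameters reported for a specific instrument):

```latex
zeV = \tfrac{1}{2} m v^{2}
\;\;\Rightarrow\;\;
t = \frac{L}{v} = L \sqrt{\frac{m}{2 z e V}}
\;\;\Rightarrow\;\;
\frac{m}{z} = \frac{2 e V t^{2}}{L^{2}}
```

Heavier secondary ions therefore arrive later, and measuring t to high precision yields the mass spectrum of the outermost monolayer.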
This technique lacks the spatial resolution of techniques such as TEM, which uses an electron beam (Goldstein 2003). However, it has a very shallow information depth, ranging between 10 and 20 Å. This means that the technique is capable of giving the examiner information about surfaces that are coated by materials in monolayers or less (EAG 2007). This aspect makes the technique very important in the field of nanoengineering, given that it can detect low levels of molecular contamination on the surfaces of specimens.
X-Ray Diffraction
X-ray diffraction is a form of X-ray scattering technique that is used in nanotechnology to examine surface properties of thin materials such as films (Goldstein 2003). It is used to get information regarding the crystallographic structure of such a surface, as well as its chemical and physical attributes. The idea behind this technique is the analysis of the intensity of the scattered x-ray beam that is focused on a sample.
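Two textbook relations sit behind this kind of analysis (quoted here as general background rather than as formulas from the cited experiment): Bragg's law, which links peak positions to lattice spacing, and the Scherrer equation, which estimates mean crystallite size from peak broadening:

```latex
n\lambda = 2 d \sin\theta,
\qquad
\tau \approx \frac{K \lambda}{\beta \cos\theta}
```

Here d is the lattice spacing, θ the diffraction angle, τ the mean crystallite size, β the peak width (full width at half maximum, in radians), and K ≈ 0.9 a dimensionless shape factor.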
In another experiment reported in EAG (2007), nanoparticles were subjected to X-ray diffraction to determine whether their crystallographic structure matched what was expected when the material was purchased. The experiment used SiC nanoparticles with sizes in the range of 55 nanometres. It was found that the crystallite size of the primary phase differed from the nanoparticle size, and that most of the nanoparticles were made up of multiple crystallites rather than a single one (EAG 2007). This indicates that the technique is indispensable in nanoengineering, given the need to analyze the surfaces of nanomaterials produced and used in the field.
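The conclusion that the crystallite size differs from the particle size is commonly reached through the Scherrer relation, which connects the broadening of a diffraction peak to the size of the coherently diffracting domain (quoted here as general background; the specific peak widths used by EAG (2007) are not reproduced in this report):

\[
D = \frac{K\lambda}{\beta\cos\theta},
\]

where \(D\) is the mean crystallite size, \(K \approx 0.9\) is a shape factor, \(\lambda\) is the X-ray wavelength, \(\beta\) is the peak width (full width at half maximum, in radians), and \(\theta\) is the Bragg angle. If \(D\) obtained in this way is smaller than the roughly 55 nm particle size, each particle must consist of several crystallites, which is what was observed.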
Data Collection and Analysis
Data Collection
As indicated in the introduction to this paper, the major aim of this report is to show the contribution of advanced analytical techniques in the nanoengineering field. The paper aims to demonstrate how these techniques can be used as nano-dimensional characterization tools. Two techniques (Auger Electron Spectroscopy and Scanning Electron Microscopy) were used.
Samples for the Study
Several nanomaterials were needed for this experiment. The study settled on nanomaterials made of aluminum and copper deposited on silicon. A similar study using these materials was carried out by EAG (2007), and the current study aims to assess whether EAG's study can be replicated.
However, there are some differences discernible between the current study and that by EAG (2007). For example, EAG did not analyze the use of Scanning Electron Microscopy, an aspect that will be incorporated in the current study.
Some considerations led to the selection of aluminum and copper deposited on silicon over other nanoparticles. One of them is the fact that the material is commercially available and easy to access as a source of nano-particles. As such, the material was simply obtained through a purchase made locally. Secondly, it was the material that was used in EAG (2007), and given that the current study intended to replicate the one by EAG, it was logical to use similar materials.
Analytical Techniques to be used
Before embarking on the analysis of the current study and the collection of data, it is important to provide a brief background on the two advanced analytical techniques that will be employed. These are scanning electron microscopy and auger electron spectroscopy. This background aims to enable the reader to locate the rest of the study within the larger field of advanced analytical techniques used in nanoengineering.
Auger Electron Spectroscopy
This technique involves the use of a high-energy primary electron beam in the range of 2 to 10 keV (Egerton 2005). The sample under study is exposed to this beam, giving rise to the emission of secondary electrons known as Auger electrons (Goldstein 2003). The resulting electrons are detected and analyzed, and they are then focused on an imaging surface, similar to the SEM process described below.
This technique is used to study surfaces, especially in the nanoengineering field, which is one of the reasons it was selected for the current study. The energies of the resulting Auger electrons are discrete (Adams et al 2005) and point to the elements deposited on the surface of the specimen (Egerton 2005). The peak positions of the Auger electrons are analyzed to identify the elements present on the surface and their chemical composition.
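The element-specific character of the peaks follows from the energy balance of the Auger process. For a KL\(_1\)L\(_{2,3}\) transition, for example, the kinetic energy of the emitted Auger electron is approximately

\[
E_{KL_{1}L_{2,3}} \approx E_{K} - E_{L_{1}} - E_{L_{2,3}} - \phi,
\]

where \(E_{K}\), \(E_{L_{1}}\) and \(E_{L_{2,3}}\) are the binding energies of the electron levels involved and \(\phi\) is the spectrometer work function (the exact value of \(\phi\) is instrument dependent). Because these binding energies are unique to each element, the peak positions act as an elemental fingerprint of the surface.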
Scanning Electron Microscopy
This method involves the use of a Scanning Electron Microscope (SEM), which operates with high-energy electrons (Messina et al 2002). A focused beam of these electrons gives rise to a multitude of signals at the surface of the sample specimen.
When the electrons interact with the surface of the sample, distinct signals are developed. Analyses of these signals provide information on the properties of the surface. This includes the texture of the surface, its crystallography, chemical composition, and other attributes of deposits on the surface.
Again, this technique was selected given the fact that it is mainly used in analyzing surfaces of samples, and this was the major aim of this paper.
Observations Made
The aluminum and copper nanoparticles were subjected to the two analytical techniques described above to assess their composition.
Results for Auger Electron Spectroscopy and Scanning Electron Microscopy
It was noticed that the AES technique offered enhanced spatial resolution compared to SEM. It also had higher surface sensitivity, and these conclusions are similar to those made in EAG (2007). These characteristics make the two techniques highly complementary.
To exploit this complementarity, the area of the sample used for Auger analysis was also used for SEM imaging. The information depth of AES is usually within the range of 30 to 60 Å, which enhances its complementarity with SEM further.
Figure: Auger Electron Spectroscopy spectrum of a copper grid. The upper trace represents the measured spectrum, while the lower trace shows its derivative.
Individual nanoparticles were further subjected to elemental analysis and the results compared to those in EAG (2007). The results were comparable, and are presented below:
Elemental Analysis Results for the Current Study
Aluminium: C (40%), Si (23%), Al (20%), O (17.5%), Cu (0.4%)
Copper: C (38%), Si (27%), O (21%), Cu (14%)
Elemental Analysis Results for EAG (2007)
Aluminium: C (43%), Si (20%), Al (19%), O (18%), Cu (0.6%)
Copper: C (40%), Si (25%), O (19%), Cu (16%)
From the results above, it is clear that the findings of the current study are comparable to those reported by EAG (2007).
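As a quick arithmetic check on this claim (a minimal sketch only; the percentages are simply transcribed from the lists above, and the absolute-difference comparison is an illustrative choice made here, not part of either study), the gap between the two sets of results can be computed as follows:

# Compare the elemental compositions reported above for the current study and EAG (2007).
# Values are the percentages listed in the text; differences are in percentage points.
current = {
    "Aluminium": {"C": 40.0, "Si": 23.0, "Al": 20.0, "O": 17.5, "Cu": 0.4},
    "Copper":    {"C": 38.0, "Si": 27.0, "O": 21.0, "Cu": 14.0},
}
eag_2007 = {
    "Aluminium": {"C": 43.0, "Si": 20.0, "Al": 19.0, "O": 18.0, "Cu": 0.6},
    "Copper":    {"C": 40.0, "Si": 25.0, "O": 19.0, "Cu": 16.0},
}

for sample, composition in current.items():
    for element, value in composition.items():
        diff = abs(value - eag_2007[sample][element])
        print(f"{sample:9} {element:>2}: current {value:5.1f}%  EAG {eag_2007[sample][element]:5.1f}%  difference {diff:3.1f}")

Every difference computed this way is within about three percentage points, which supports the statement that the two studies are comparable.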
Weaknesses and Strengths of the Techniques used
Scanning Electron Microscopy
One major weakness of Scanning Electron Microscopy is that it can only be used on solid specimens (Swapp 2011), which means it cannot be used to analyze the surface properties of gases and liquids. Another weakness is that the size of the sample is limited: it must be small enough to fit into the microscope chamber, so large samples cannot be accommodated (Messina et al 2002). According to Swapp (2011), the microscope chamber can only accommodate a sample about 10 centimeters long and 40 millimeters thick.
It is also noted that the specimen should remain stable in the vacuum chamber. This means that specimens such as coal and other organic materials that are unstable under low pressures cannot be analyzed using this technique (University of Alberta 2003).
These weaknesses did not affect the current study, given that the sample used was solid aluminum and copper deposited on silicon. The sample could be scaled down to fit into the chamber, and it remained stable under low pressure.
The major strength of SEM is that it is one of the most effective ways to analyze the surfaces of solid materials, which is why it is widely applied in geology and nanoengineering. Data is also acquired within a short time, usually less than five minutes. The sample does not have to be elaborately prepared before examination, and the resulting image can be digitized, making it highly portable. These strengths are the reason SEM was selected for this study.
AES
One major weakness of this technique is that it can only analyze conducting and semi-conducting specimens such as metals (Chourasia and Chopra 2010). Non-conducting specimens have to be processed before they are subjected to AES, for example by coating a specimen such as a spider with gold before placing it in the AES machine. Like SEM, this technique is used on solid specimens only. Additionally, specimens that are unstable under an electron beam are not suitable for the procedure, and it is difficult to quantify data obtained from AES. These weaknesses did not affect the current study, given that the sample used was solid, conducting, and stable under an electron beam. For quantification purposes, SEM was used as a complement.
A major strength of this technique is that it can be used to analyze almost any solid specimen, provided the specimen is conducting and remains stable under an electron beam. The technique can analyze the specimen as it is, meaning that special preparation is not needed. However, since the analysis is carried out in a high vacuum, the surfaces of some specimens need to be cleaned before they are placed in the vacuum chamber. The analysis also takes a short time, given that a survey spectrum can be obtained in less than five minutes (Swapp 2011). These strengths are the reason the aluminum and copper sample was analyzed with this technique.
References
Adams, F, Vaeck, L V and Barrett, R 2005. Advanced Analytical Techniques: Platform for Nano Materials Science, Spectrochimica Acta Part B Atomic Spectroscopy, 60(1); 13-26.
Chourasia, A R and Chopra, D R 2010. Handbook of Instrumental Techniques for Analytical Chemistry, Texas: Texas University Press.
Egerton, R F 2005. Physical Principles of Electron Microscopy: An Introduction to TEM, SEM, and AEM, London: Springer.
Evans Analytical Group [EAG] 2007. Analytical Methods for Nanotechnology, Web.
Goldstein, J 2003. Scanning Electron Microscopy and X-ray Analysis, London: Kluwer Academic.
Gondran, C F, Charlene, J and Kisik, C 2006. Front and Back Side Auger Electron Spectroscopy Depth Profile Analysis to Verify an Interfacial Reaction at the HfN/SiO2 Interface, Journal of Vacuum Science and Technology, 24(5); 24-57.
Grant, J T and Briggs, D 2003. Surface Analysis by Auger and X-Ray Photoelectron Spectroscopy, Chichester: IM Publications.
Messina, A R, Rivera, S C, Olguin, S D and Ruiz, V D 2002. Development of Advanced Analytical Techniques for the Analysis of Subsynchronous Torsional Interaction with FACTS Devices, Electric Power Engineering, 2002.
Swapp, S 2011. Scanning Electron Microscopy (SEM), University of Wyoming, Web.
University of Alberta 2003. Auger Electron Spectroscopy, Web.
Taking Personnel into Account when an Explosion Suppression System is under Consideration
The issue of staff is critical whenever an organization is planning to acquire an Explosion Suppression System (ESS). These systems are common in vessels where over-pressurization is a major concern, and they are usually designed in accordance with NFPA 69 (Gagnon, 2008, p. 228). Managers and organizations should therefore hire the right people when designing their Explosion Suppression Systems.
These systems require constant support and maintenance, so the targeted personnel should possess the required competencies and engineering skills to manage the system. It is also appropriate to document the responsibilities of different people within the system, and managers should establish an efficient chain of command for the hired personnel (Gagnon, 2008, p. 229). This approach will ensure every person understands his or her responsibilities. Managers should also hire the right ESS designers, who should be familiar with different ESSs.
A proper chain of command among the personnel will ensure the workers understand every area of responsibility (Gagnon, 2008, p. 229). The designers of the Explosion Suppression System should ensure every employee monitors the procedures associated with different ignition sources and combustible compounds (Gagnon, 2008, p. 230). These individuals should always maintain, inspect, monitor, and test every ESS and use the best strategies to deal with every disaster. The targeted staff should also undertake numerous emergency drills and apply new ideas to make the ESS more effective.
Continuous training will ensure the staff achieves the best outcomes. Every organization should hire qualified individuals to support the Explosion Suppression System. Such individuals should possess strong skills and engineering competencies, as well as a clear understanding of the proposed ESS. This approach will ensure the ESS achieves its goals and will eventually make the targeted organization successful.
Methodology to Employ the Correct Profile for Every Anticipated Commodity
Companies and plants should employ the correct profile for every anticipated commodity. A structured framework should be used to select the best ESS. The first step is engaging the right individuals, who will offer appropriate suggestions and recommendations (Bennett, 2000). The plant should then identify the best ESS depending on the targeted industrial activity. After selecting the correct ESS, it is appropriate to consider the compatibility of the targeted agent.
This is why the suppression agent should be compatible with the commodity (Gagnon, 2008, p. 222). The issue of personnel protection is also relevant when selecting a specific profile. Engineers and designers should consider the proximity of every person to every active vessel (Bennett, 2000, p. 322) and use this knowledge to select the right profile for every plant.
The above scenario also explains why designers should select the best agent, one that ensures the completed profile does not threaten the lives of different individuals. Designers should also analyze the risks associated with the targeted plant (Bennett, 2000), since this understanding will produce an effective profile for the anticipated commodity. The above methodology will ensure every designer produces the best Explosion Suppression System (ESS). The design of the ESS should also focus on the roles of different personnel in the targeted industry. In conclusion, engineers of ESSs should understand the effectiveness of different profiles and technologies.
Reference List
Bennett, G. (2000). Review of Technologies for Active Suppression for Fuel Tanks Explosions. Halon Options Technical Working Conference, 1(1), 314-324.
Gagnon, R. (2008). Design of Special Hazard and Fire Alarm Systems. Clifton Park, NY: Thomson Delmar Learning.