Comparative Analysis of Marketing Strategies At Granular.ai


Conceptual Background and Literature Review

Marketing:

The term marketing derives from the base word market: the place, location, or platform where buyers and sellers meet to exchange products and services.

Alternative Data Market:

The alternative data market builds predictive insight and improves investment returns through the collection, framing, packaging, modeling, and distribution of large structured and unstructured data sources.

Marketing Strategy:

A marketing strategy is the overall business game plan for reaching prospective customers and turning them into customers of the products and services the business provides. It contains the key brand messaging, the company's value proposition, data on target customer demographics, and other high-level elements.

Comparative Analysis:

In general, comparative analysis is the study in which two or more objects or ideas are compared and analyzed. Groups that, either by choice or by circumstance, are exposed to different treatments are observed, and the relationship between two or more variables is determined and quantified; this process is known as comparative analysis.

Types of Comparative Analysis are:

  • Individualizing Comparison
  • Variation Finding Comparison
  • Encompassing Comparison
  • Universalizing Comparison

Example objectives of a comparative analysis:

  • To know the current technologies in use.
  • To know the different strategies in use.
  • To identify the best in the industry.
  • To identify the best and better performers.
  • To examine the bad and worse performers.

The steps to follow in the process of Comparative Analysis:

  • Obtain all the basic data of the subject.
  • Gather the past data of the subject.
  • Examine the recent activities of the subject.
  • Examine the similar product or service in the market.
  • Understand the micro-market trends of the subject.
  • Compare the best in the market and the subject.
  • Document the result and outcomes.
  • Implement or follow the best.

Benefits of Comparative Analysis of Market:

  • It helps guide communication.
  • It helps in opportunity identification in the market.
  • It estimates the status and reputation of the subject.
  • It helps establish the current trends.
  • It helps determine the best promise to make.

Advantages and disadvantages of Comparative analysis:

Advantages:

  • It provides strength to the subject.
  • The real and public data are used to analyze.
  • It provides a clear picture of the current status of the subject.
  • It mainly focuses on the development of the subject.
Disadvantages:

  • Comparisons across companies and transactions are quite difficult, since the numbers available and access to the subject are limited.
  • It is not so flexible compared to other methods.
  • The data available may not be accurate or suitable.

Literature Review with Research Gap:

Paper 1:

The paper “State of Alternative Data Market 2019 Pricing Survey Report,” published by BattleFin and AlternativeData.org in 2018, reveals the views and trends on pricing from data providers and data buyers through a comprehensive product pricing survey. The survey allows a better understanding of alt-data pricing, gives strategic insight into alt-data pricing structures, and offers a comprehensive view of the alt-data landscape. BattleFin conducted a survey concentrated on sourcing, testing, evaluating, and buying alternative data for investment companies and corporations. AlternativeData.org simplifies access to alternative data by providing the newest alternative datasets, jobs, news, tools, and events. The survey reaches the following conclusions:

About 60 percent of data buyers believe datasets are overpriced, while about 90 percent of data-provider respondents expect prices to increase or remain the same.

Most alt-data buyers had only a small or zero alt-data budget in 2018. In 2019, the number of data buyers with capital allocated to data budgets and entering the market grew by about 8.8 percent, and the report notes a roughly 52 percent year-over-year increase in the number of buyers with an alt-data budget above $1 million.

The largest group of alt-data buyers (42.03%) sees average pricing between $50K and $120K per year, and the second-largest group sees prices averaging between $10K and $50K per year.

Paper 2:

The paper “Comparative Analysis of Coca-Cola Company and PepsiCo Research Paper” by Cuthbert (2019) reveals that the market is dominantly occupied by Coca-Cola and PepsiCo, which exhibit a high sense of brand consciousness and have seen growth in the non-cola and ready-to-drink beverage segments. It also references the cola wars: other companies have entered and put effort into grabbing market share, but these two companies have never given them a chance to break into their market. Coca-Cola enjoys a brand reputation that has spread across countries, and PepsiCo has not been deterred from making its mark, challenging Coca-Cola's dominance.

Paper 3:

The paper “Comparative Analysis of Various Network Marketing Companies Operating in Himachal Pradesh” by Kamal Kanth Vashisth, Sanjay Kumar, and Swetha Thakur (2019) analyzes the diverse motives behind individuals' choices to become distributors for network marketing companies such as Amway, Vestige, FLP, and Modicare. The paper aims to study distributors' attitudes and perceptions toward direct marketing in Himachal Pradesh with reference to the above companies, and uses factor analysis to reveal the reasons for buying the products. The study uses snowball sampling, with a questionnaire for data collection. In conclusion, the study discloses that many joined network marketing as distributors to gain and achieve several things, such as developing their skills and earning a good income. Network marketing provides a good opportunity to earn money, and the majority of respondents reveal that distributors are greatly motivated by the compensation plan, the business opportunity, and the quality of the product.

Paper 4:

The paper “A Comparative Study of Traditional Marketing and Online Marketing” by G Kanuka Raju and G Haranath (2019) reveals the factors that influence and shape a customer's attitude and perception toward online and traditional marketing, depending on their attitudes, time, habits, and knowledge of technology. The paper concludes that most people prefer to buy products through traditional marketing, citing reasons such as lack of technological knowledge, poor product quality, fraud in delivery, and other threats in online transactions. After-sales service is expected when purchasing a product but is denied for many products bought online; if this facility were provided, people might prefer online transactions.

Paper 5:

The paper “A Study on Comparative Analysis of Tata Consultancy Services and Infosys on the Basis of Their Capital Market Performance” by Sujoy Dhar (2017) identifies the different methods used to judge the capital market performance of the stocks of Tata Consultancy Services and Infosys, compares and contrasts the share performance of the two companies, and performs an Economy-Industry-Company (EIC) analysis for both. The data analysis is done with relevant statistical and financial techniques and models. Fundamentals are used to judge the intrinsic value of the stock, while demand and supply determine the market price. A stock is considered overpriced when its market price exceeds its intrinsic value and the price does not reflect that value. The extent to which a company is able to justify its shareholders' wealth-maximization motto can be judged by its capital market performance.

Paper 6:

The paper “Comparative Study of Major Telecom Providers in India” by Ashutosh Mishra, Martyunjay Singh, Dr. Arvind Mittal, and Prof. Archana Soni (2015) studies the major telecom providers in India, such as Bharti Airtel Limited, Vodafone India Limited, Tata Communications Limited, Idea Cellular Limited, Reliance Communications Limited, and Bharat Sanchar Nigam Limited, all of which are putting in efforts to lead the industry. The study is carried out on business indicators such as net sales, profit after tax, total income, total expenditure, and level of satisfaction, using secondary data available on the internet, with the collected data analyzed using the Prowess software. The data covers the 15-35 age group, so further research can be carried out for other age groups. The study concludes that Bharti Airtel Limited leads the telecom industry with respect to all the business indicators specified above; the study is limited to the Bhopal region and could be extended to all of India.

Paper 7:

The paper “A Comparative Analysis on Usage of Social Media among Print and Electronic Media Journalists Working in Hyderabad, Telangana State” by Anitha Kaluvoya (2015) tests the use of social media platforms, namely YouTube, Facebook, Twitter, WeChat, and WhatsApp, by print and electronic media journalists working in Hyderabad city, and compares their participation against the 1% rule of thumb. The study used a random sampling method, collecting primary data from print and electronic journalists, and draws on Uses and Gratifications Theory (UGT). It concludes that respondents in both print and electronic media feel that authentic information is not provided through social media, though they appreciated social media's Live features; the results show that print media journalists are less active on social media than electronic media journalists.

Cluster-based Retrieval System for News Groups Data: Comparative Analysis


Introduction

Abstract

The main aim of this paper is to build a cluster-based retrieval system for categorizing newsgroups data and to perform a comparative performance analysis of hard and soft clustering methods. Hard clustering is the most popular approach: a data point is given a hard assignment to just one cluster, e.g., k-means or hierarchical clustering. In soft clustering, on the other hand, a data point can belong to more than one cluster, e.g., fuzzy clustering or latent semantic analysis. This paper explores both types of clustering methods through a comparative performance analysis, and an evaluation of the semantic retrieval system is performed using a test collection of queries.

Motivation

The motivation is to explore soft clustering techniques on newsgroups data. Even though the most popular clustering methods use hard assignment, the fact that a particular document could be related to more than one category is lost with hard clustering. Thus, the motive was to do a comparative analysis of these clustering techniques and build a cluster-based retrieval system. Building an IR system that meets the user's needs and intent is always challenging, which motivated building a semantic retrieval system: it gives the user flexibility with respect to the categorization of topics and access to similar relevant documents as well.

Potential benefits

The potential benefit of this system is better categorization of news data. It would also help in building new techniques that mitigate the drawbacks of the hard and soft clustering methods, and in better understanding the semantics of the documents.

Clustering

Clustering, as the name suggests, is the grouping of similar objects together, where objects in the same group are more similar to each other than to objects in other groups. It is an unsupervised machine learning technique, which learns from the data which points are similar to each other, depending on the clustering technique used. Text clustering deals with clustering text documents (unstructured data), where similar documents are clustered together.

Automatic document organization, topic extraction, information retrieval and filtering have one thing in common, which is text clustering (“What is Text Clustering?”, 2018).

How does it work?

Descriptors, sets of words that describe the topic matter, are extracted from the document and then analyzed with respect to the frequency with which they occur in the document compared to other terms (“What is Text Clustering?”, 2018). Clusters of descriptors are then identified and auto-tagged.

Cluster-based retrieval system

A cluster-based retrieval system discovers semantically similar terms in documents, under the hypothesis that documents in the same cluster behave similarly with respect to relevance to information needs (“Introduction to Information Retrieval”, n.d., p. 350). According to this cluster hypothesis, clustering can increase the efficiency and effectiveness of the retrieval system.

Dataset

The “20 Newsgroups” (n.d.) dataset is a collection of 20,000 documents partitioned evenly across 20 news categories related to technology, religion, politics, recreation, science, and miscellaneous topics. Each topic can be viewed as a class. In order to perform clustering, we assume the labels are not available and find clusters among the documents, where documents in each group are more similar to each other than to documents in other groups. The main intent of this paper is to evaluate how well the known classes of the dataset are reconstructed by the clustering methods. The following shows the 20 news categories, grouped into six main categories: Technology, Recreation, Science, Politics, Religion, and Misc.

  • comp.graphics
  • comp.os.ms-windows.misc
  • comp.sys.ibm.pc.hardware
  • comp.sys.mac.hardware
  • comp.windows.x
  • rec.autos
  • rec.motorcycles
  • rec.sport.baseball
  • rec.sport.hockey
  • sci.crypt
  • sci.electronics
  • sci.med
  • sci.space
  • misc.forsale
  • talk.politics.misc
  • talk.politics.guns
  • talk.politics.mideast
  • talk.religion.misc
  • alt.atheism
  • soc.religion.christian

Literature Review

Existing work

A lot of comparative studies have been performed between hard and soft clustering methods. In an unsupervised study, data of similar types are put into one cluster, while data of other types are put into different clusters. Fuzzy c-means is an important clustering technique based on fuzzy logic, and an experimental comparative study has been made between the fuzzy clustering algorithm and the (hard) k-means clustering algorithm (Bora & Gupta, 2014, p. 108).

That study examined the two methods and concluded that k-means has lower complexity than FCM with respect to computational time: since fuzzy clustering involves more fuzzy-logic computation, its running time increases accordingly. The choice of clustering method depends on the data we want to cluster. In some situations we cannot assume that a data point belongs to only one cluster; its properties may contribute to more than one cluster. Soft clustering has proved to perform better on noisy data, and hence is used in a number of real-life applications. Chen (2017) describes how soft clustering is useful in handling very large data sets and the kinds of applications where it comes into play. In comparison to hard clustering, it is considered more realistic because of its ability to handle the impreciseness, uncertainty, and vagueness of real-world problems (Chen, 2017, p. 102).

Work evaluating hard and soft flat clustering methods for text documents shows that fuzzy clustering gave better results than k-means for most of the datasets, and hence it is also referred to as the more stable method (Singh, Siddiqui & Singh, 2012, p. 102). A lot of analysis has been performed on the 20 Newsgroups data, since it is large and offers plenty of scope for exploratory analysis of the text documents. Liu & Croft (2004) used language modelling to show that cluster-based retrieval can perform consistently across collections of realistic size, with significant improvements over document-based retrieval, in a fully automated manner and without relevance information provided by humans (Liu & Croft, 2004, p. 286).

Proposed solution

The main aim of this paper is to build a cluster-based retrieval system for the newsgroups data by performing a comparative analysis of hard and soft clustering techniques, namely k-means, hierarchical clustering, FCM, and latent semantic analysis, and by evaluating the semantic retrieval system. The newsgroups data contains 20 sub-categories grouped into six main news categories: technology, religion, recreation, politics, science, and misc. Performance across the clustering methods is compared using clustering validation techniques, in particular external validation of the clusters against the ground truth available in the dataset. Evaluation of the retrieval system uses metrics such as recall and precision.

Methodology

The clustering retrieval system is built by applying the clustering techniques to the news data, with the main categories Technology, Science, Politics, Recreation, Religion, and Misc.

The steps involved are as follows:

● Data pre-processing

The initial step in dealing with unstructured data is to pre-process the text documents. The data-preprocessing of the documents involves a series of steps.

  1. Tokenization, which involves removing punctuation and standardizing all tokens to lower case.
  2. Removal of stop words such as conjunctions.
  3. Stemming of words using the standard Porter stemmer algorithm.

The atheism news category, which contains around 799 documents, has been pre-processed in Python following the above steps, giving a vocabulary size of around 10,656 for that category alone. [Figure: a subset of the extracted vocabulary and counts for the atheism category]
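
A minimal sketch of this pipeline, assuming NLTK for tokenization, stop words, and the Porter stemmer (the report only states that the pre-processing was done in Python):

```python
# Pre-processing sketch: tokenize, lower-case, drop punctuation and stop
# words, then apply the Porter stemmer. Assumes NLTK with its "punkt" and
# "stopwords" data packages installed.
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

stop_words = set(stopwords.words("english"))
stemmer = PorterStemmer()

def preprocess(text):
    tokens = word_tokenize(text.lower())                  # 1. tokenize + lower-case
    tokens = [t for t in tokens if t.isalpha()]           #    drop punctuation/numbers
    tokens = [t for t in tokens if t not in stop_words]   # 2. stop-word removal
    return [stemmer.stem(t) for t in tokens]              # 3. Porter stemming

print(preprocess("God is love, and love is the answer."))
# -> ['god', 'love', 'love', 'answer']
```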

● Build the term-document matrix

Once the data has been pre-processed, we have a vocabulary of words from which the term-document matrix can be built; it describes the frequency of each term across the collection of documents. Term frequency is one of the scoring measures widely used in information retrieval and summarization. Another option is tf-idf (term frequency-inverse document frequency), whose values show how relevant a term is in a given document: the tf-idf of a term indicates the importance of that term in that particular document. Later, tf-idf weighting can be compared against plain term frequency in terms of performance.
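
As a sketch, the matrices can be built with scikit-learn's vectorizers (a library choice assumed here, not specified in the report); CountVectorizer yields raw term frequencies and TfidfVectorizer the tf-idf weighting to compare against. Note that the vectorizers produce a document-term matrix, the transpose of the term-document matrix:

```python
# Build term-frequency and tf-idf matrices for a toy document collection.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

docs = ["space shuttle launch delayed",
        "hockey playoffs start tonight",
        "new graphics card for windows"]

tf = CountVectorizer()
X_tf = tf.fit_transform(docs)          # rows = documents, columns = terms

tfidf = TfidfVectorizer()
X_tfidf = tfidf.fit_transform(docs)    # same shape, tf-idf weighted

print(tf.get_feature_names_out())
print(X_tf.toarray())
```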

● Apply hard clustering algorithms

Apply the hard clustering algorithms, k-means and hierarchical clustering, to the text documents using the built term-document matrix. K-means: K-means is one of the simplest and most popular unsupervised machine learning algorithms. It starts with an initial group of randomly selected centroids and performs repeated iterations to optimize the positions of the centroids (Garbade, 2018). Here, the value of K (the number of clusters) is pre-defined or determined using the elbow method.
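
A hedged sketch of this step, assuming scikit-learn and its built-in 20 Newsgroups loader; K = 6 matches the six main categories:

```python
# k-means over tf-idf features of the 20 Newsgroups training set.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

data = fetch_20newsgroups(subset="train",
                          remove=("headers", "footers", "quotes"))
X = TfidfVectorizer(stop_words="english",
                    max_features=5000).fit_transform(data.data)

km = KMeans(n_clusters=6, n_init=10, random_state=42)  # 6 main categories
labels = km.fit_predict(X)
print(labels[:10])   # hard cluster assignment of the first ten documents
```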

HAC: Hierarchical agglomerative clustering is a bottom-up clustering method that treats each document as a singleton cluster and then successively merges pairs of clusters until all clusters have been merged into a single cluster containing all the documents (“Introduction to Information Retrieval”, n.d., p. 378). Hierarchical clustering creates a hierarchy with explicit structure, but it is computationally expensive for big data, as its complexity is quadratic.
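
A companion sketch for HAC, again assuming scikit-learn; because of the quadratic cost, only a small dense sample is clustered:

```python
# Agglomerative (bottom-up) clustering of a 500-document sample.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

data = fetch_20newsgroups(subset="train",
                          remove=("headers", "footers", "quotes"))
X = TfidfVectorizer(stop_words="english", max_features=2000)\
        .fit_transform(data.data[:500]).toarray()  # dense input required

hac = AgglomerativeClustering(n_clusters=6, linkage="ward")
labels = hac.fit_predict(X)
print(labels[:10])
```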

● Apply soft clustering methods

Next, the soft clustering algorithms, latent semantic analysis and fuzzy c-means, are applied to the newsgroups data. Soft clustering helps identify the topics covered by each document, rather than hard-assigning the document to just one cluster.

Latent Semantic Analysis: LSA is a text-mining dimensionality reduction technique in which each document is assigned a set of topic loadings (Boling & Das, 2015, p. 9). LSA learns latent topics by performing a matrix decomposition of the term-document matrix using singular value decomposition (SVD), and it is typically used as a dimensionality reduction technique (“Latent Semantic Analysis using Python”, n.d.).
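
A sketch of LSA via truncated SVD, assuming scikit-learn; each row of the output holds one document's loadings across the latent topics, which act as soft memberships rather than a single hard label:

```python
# LSA: low-rank SVD of the tf-idf matrix.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

data = fetch_20newsgroups(subset="train",
                          remove=("headers", "footers", "quotes"))
X = TfidfVectorizer(stop_words="english",
                    max_features=5000).fit_transform(data.data)

lsa = TruncatedSVD(n_components=6, random_state=42)   # 6 latent topics
X_topics = lsa.fit_transform(X)
print(X_topics[0])   # topic loadings of the first document
```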

Fuzzy c-means clustering: Fuzzy clustering is a soft clustering method in which each element has a degree of membership in each cluster. FCM is one of the most widely used fuzzy clustering algorithms; the centroid of a cluster is calculated as the mean of all points, weighted by their degree of belonging to the cluster (“Fuzzy Clustering Essentials”, n.d.).
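
Since the report does not name an FCM library, the sketch below is a from-scratch NumPy illustration of the update rules just described (membership-weighted centroids, inverse-distance memberships):

```python
# Minimal fuzzy c-means. U[i, j] is point i's degree of membership in
# cluster j; the fuzzifier m > 1 controls how soft the assignments are.
import numpy as np

def fuzzy_c_means(X, c=6, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)                    # rows sum to 1
    for _ in range(n_iter):
        W = U ** m
        centroids = (W.T @ X) / W.sum(axis=0)[:, None]   # weighted means
        d = np.linalg.norm(X[:, None, :] - centroids[None], axis=2) + 1e-10
        U = d ** (-2.0 / (m - 1))                        # inverse-distance
        U /= U.sum(axis=1, keepdims=True)                # re-normalize
    return centroids, U

X = np.random.default_rng(1).random((200, 10))   # stand-in for LSA vectors
_, U = fuzzy_c_means(X)
print(U[0])   # soft memberships of the first point across the 6 clusters
```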

● Comparative analysis

Having applied hard and soft clustering, a comparative analysis of both sets of clusters is performed with respect to external validation results. As the ground truth is available in the dataset, the comparison can be based on the accuracy of both clustering types.
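
A sketch of this validation step, assuming scikit-learn's metrics; the adjusted Rand index (ARI) and normalized mutual information (NMI) compare cluster assignments to the ground truth without requiring cluster IDs to match label IDs:

```python
# External validation of a clustering against known labels (toy data).
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

true_labels = [0, 0, 1, 1, 2, 2]   # ground-truth categories
pred_labels = [1, 1, 0, 0, 2, 2]   # cluster IDs from any clustering method

print(adjusted_rand_score(true_labels, pred_labels))            # 1.0
print(normalized_mutual_info_score(true_labels, pred_labels))   # 1.0
```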

● Build the clustering retrieval system

After building the clustering models and performing the comparative analysis, the information retrieval system that returns clustered results for a query is built. On searching a query, the relevant documents are retrieved together with the clusters related to the user's query, which helps the user navigate to the topic of concern, thus addressing both the generality and the specificity of the user's information needs.

● User interface for the IR system

Build a user interface for the retrieval system that shows the most relevant documents together with the clusters relevant to the user's query, giving the user the flexibility to navigate to the topic (cluster) of concern.

● Evaluation of the retrieval system

In order to evaluate the retrieval system, a test collection with queries and relevance judgments needs to be prepared. Using the test collection, the two most frequent and basic measures of information retrieval effectiveness are precision and recall (“Introduction to Information Retrieval”, n.d., p. 155).

Precision: Precision is the fraction of retrieved documents that are relevant.

P = P(relevant|retrieved) = #(relevant items retrieved) / #(retrieved items)

Recall: Recall is the fraction of relevant documents that are retrieved.

R = P(retrieved|relevant) = #(relevant items retrieved) / #(relevant items)
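
A toy computation of both measures for a single query (the document IDs are illustrative only):

```python
# Precision and recall for one retrieval run.
retrieved = {"d1", "d2", "d3", "d4"}    # documents the system returned
relevant = {"d2", "d4", "d7"}           # ground-truth relevant documents

hits = retrieved & relevant
precision = len(hits) / len(retrieved)  # 2 / 4 = 0.50
recall = len(hits) / len(relevant)      # 2 / 3 = 0.67 (approx.)
print(precision, recall)
```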

Deliverables

  • Document collection and pre-processed data with the vocabulary of words built.
  • Term-document matrix for the vocabulary of terms and the documents.
  • Clusters generated from the hard clustering methods on the news groups data.
  • Clusters generated on applying the soft clustering methods like LSA and FCM.
  • Comparative analysis on the above clusters generated with hard and soft clustering using external cluster validation (ground truth of the data).
  • The retrieval system that shows up relevant documents with the clustered results.
  • Prepare the test collection of queries and perform evaluation of the semantic retrieval system.

Timeline

Feasibility

Challenges/barriers

Soft clustering algorithms can be computationally expensive on big data compared to hard clustering. Another challenge is building the test collection of queries for evaluating the semantic retrieval system.

Project Scope

The scope of this project is within RIT campus and can be used as an add-on to understanding the comparative performance of hard and soft clustering, for building better text-clustering models.

Software

With respect to software, Python will be used as the programming language for building the clustering models and the retrieval system, as the project deals with large amounts of data. The clustering models and the term-document matrix will be pre-computed and loaded for use by the retrieval system.

For the user interface, JavaScript can be used to build a simple interface for demonstrating the retrieval system, which in turn queries the models built in Python.

Fig: Tentative timeline for the project

References

  1. Bora, D. J., & Gupta, A. K. (2014). A Comparative Study Between Fuzzy Clustering Algorithm and Hard Clustering Algorithm. International Journal of Computer Trends and Technology, 10(2), 108-113. doi: 10.14445/22312803/ijctt-v10p119
  2. Chen, M. (2017). Soft Clustering for Very Large Data Sets. International Journal of Computer Science and Network Security, 17(1).
  3. Singh, V. K., Siddiqui, T. J., & Singh, M. K. (2012). Evaluating Hard and Soft Flat-Clustering Algorithms for Text Documents. Proceedings of the Third International Conference on Intelligent Human Computer Interaction (IHCI 2011), Prague, Czech Republic, 63-76. doi: 10.1007/978-3-642-31603-6_6
  4. Liu, X., & Croft, W. B. (2004). Cluster-Based Retrieval Using Language Models. Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '04). doi: 10.1145/1008992.1009026
  5. 20 Newsgroups. (n.d.). Retrieved from http://qwone.com/~jason/20Newsgroups/
  6. What is Text Clustering? (2018, July 27). Retrieved from https://insidebigdata.com/2018/07/26/what-is-text-clustering/
  7. Introduction to Information Retrieval. (n.d.). Retrieved from https://nlp.stanford.edu/IR-book/
  8. Boling, C., & Das, K. (2015). Reducing Dimensionality of Text Documents Using Latent Semantic Analysis. International Journal of Computer Applications, 112(5).
  9. Latent Semantic Analysis using Python. (n.d.). Retrieved from https://www.datacamp.com/community/tutorials/discovering-hidden-topics-python
  10. Garbade, M. J. (2018, September 12). Understanding K-means Clustering in Machine Learning. Retrieved from https://towardsdatascience.com/understanding-k-means-clustering-in-machine-learning-6a6e67336aa1
  11. Fuzzy Clustering Essentials. (n.d.). Retrieved from https://www.datanovia.com/en/lessons/fuzzy-clustering-essentials/

Comparative Analysis of CSR of Woodside and Evolution


A global initiative to advance corporate sustainability has gathered pace in recent years. The skills, knowledge, and sophistication associated with leading corporate sustainability initiatives have developed to move sustainability from the edges to the mainstream of corporate activity (Klettner, 2014). Woodside is a gas producer, whereas Evolution is a gold mining company. Woodside is the largest natural gas producer in Australia and the pioneer of its liquefied natural gas (LNG) industry; it is recognized as an integrated upstream supplier of energy (Woodside CSR, 2018). Evolution Mining has evolved from a small company into a well-recognized gold mining company and owns five gold operations in Australia: three in Queensland, one in Western Australia, and one in New South Wales (Evolution CSR, 2018). It is the third-largest gold mining company on the ASX list. The reporting year for both companies was selected based on 2018 turnover.

This paper focuses on a comparative analysis of two corporate sustainability reports, one from Woodside and the other from Evolution Mining. The selected reports have been assessed and compared on four criteria: transparency and accountability, credibility, completeness, and materiality. The objective of this critical comparison is to determine which report is better for the selected year 2018, how well each report demonstrates its company's commitment to sustainability, and how each approaches the GRI guidelines and international reporting standards. The last section of this paper comprises recommendations for the report that does not meet the requirements of the criteria on which the reports have been compared.

Chosen criteria for selecting report:

This section of the paper discusses the criteria on which the two selected reports have been chosen and compared with each other. The common goal of both reports is to communicate sustainability, but they use different approaches to achieve that goal.

The first aspect to be discussed is completeness. The GRI sustainability guidelines set principles of completeness, relevance, and sustainability context for decision makers about what information is integral to a report (Adams, 2004). Completeness, as an integral part of CSR, is concerned with the reporting boundaries (i.e., the entities included), the scope (i.e., the aspects and issues reported), and the time frame. Moreover, the information contained within a report must be relevant and include an account of the organization's behavior and performance in order to meet the test of completeness.

Another key aspect to be considered when comparing the two reports is credibility. A credible report contains verifiable information characterized by supporting evidence, without material error or bias, and with assurance from a third party (CDSB Framework, 2019). External assurance is considered one of the key elements for ensuring the credibility of a CSR report; as an independent body, the assurance provider gives stakeholders adequate assurance of completeness, credibility, and materiality (Michelon, 2015).

Comparative analysis:

Transparency and Accountability:

The growing body of engagement practitioners, along with clear expectations of engagement from government, community, and industry, requires greater transparency and accountability (IAP2 standards). This requirement is intended to exhibit transparency about, and accountability for, the organization's oversight of environmental policies, strategy, and information. Environmental policies can be strengthened by the leadership of a board committee or senior governing body, and disclosures of environmental policies, strategy, and information from the CEO or executive committee are considered (CDSB Framework, 2019). The Woodside report highlights its approach to shaping the future, its people and communities, its environment, and its vision for the future across its global activities (Woodside, 2018). In contrast, Evolution's CSR does not satisfy the requirement of environmental policies supported by a CEO letter or a statement from a senior executive board committee; the Evolution report also does not provide a future vision and goals for sustainability, and it therefore fails to provide transparency and accountability. External assurance was conducted by EY over the Woodside report, whereas the Evolution CSR fails to provide external verification from any assurance provider, creating ambiguity about its performance. Accountability for sustainability has been created in coordination with the GRI, the International Federation of Accountants, and the International Integrated Reporting Council (Moravcikova, 2015). In contrast to the Evolution CSR, the Woodside CSR displays a high level of transparency and accountability by presenting adequate information on its environmental and social policies, values and principles, and future-oriented claims, supported by a CEO letter. Failure to address material issues through disclosures indicates that a company introduces positive bias in its own interest and could be seen as greenwashing (Michelon, 2015). For example, Evolution's sustainability report discloses considerable information for indicators such as water and waste material, but it does not disclose adequate information about biodiversity indicators and the issues related to them.

Credibility:

The standard of legitimacy in assurance provision and reporting must be that the reporting company provides a thorough and honest account of all the actions for which its stakeholders hold it answerable (Adams, 2004). Disclosure of environmental information is usually made under conditions of additional uncertainty; a true representation of information may nevertheless be accomplished by ensuring that satisfactory evidence is presented to support the disclosures, and the information forming the basis of the disclosures should be verifiable (CDSB Framework, 2019). The desire to raise the reliability of societal reporting and to increase stakeholders' trust in sustainability information has directed companies to perform sustainability verifications, adding validity to the company and its CSR commitments (Lahbil, 2017). One of the most significant ways to enhance the credibility of CSR reporting is to have suitable regulation (Abernathy, 2017). Two facets of credibility are recognized, internal and external, and third-party statements are a vital component of external credibility (Adams, 2004). The Woodside sustainability report follows the GRI guidelines and includes external assurance to evaluate its environmental, human rights, and labor risks and performance, along with positive aspects, to maintain its reputation. The absence of an external assurance provider in Evolution's report makes it less credible.

Completeness:

Completeness is related to the degree to which an organization's operations are covered in the report (i.e., its scope) and the level to which major impacts are presented. The GRI recognizes that reporters may approach full-scope reporting in an incremental fashion, but it is insistent on the requirement for disclosure and transparency at all steps of an organization's report development (Adams, 2004). The Woodside report has been organized according to the International Petroleum Industry Environmental Conservation Association (IPIECA) Oil and Gas Industry Guidance and the Global Reporting Initiative (GRI) Standards core-level reporting (Woodside, 2018). Completeness is not attained by social accounting procedures that include only selected stakeholders and ignore others, giving accounts of some but not all (Adams, 2004). For example, Woodside applies a reliable methodology of stakeholder commitment to manage and comprehend its influences, preserve its social license to operate, and protect its reputation, and it does not exclude any stakeholders in the supply chain. Evolution, by contrast, does not deliver balanced information or a general view of completeness in its report. Information of value to stockholders should be presented in an approach consistent enough to enable comparability between similar organizations, reporting periods, and sectors (CDSB Framework, 2019). Woodside recognizes the requirement to include external stakeholders in the reporting procedure to guarantee completeness. The completeness of Evolution's CSR reporting (whether all material CSR risks are disclosed) has been found to lack quantitative measures and is poor for evaluating CSR performance.

Materiality:

According to the Co-operative Group, materiality in decision making targets the issues that are integral to stakeholders and that impact stakeholders' assessments and decisions (Jones, 2016). For instance, Woodside considers sustainability topics integral to its interest if they reflect substantial economic, environmental, and societal impacts or if they potentially affect the assessments and decisions of stakeholders (Woodside, 2018).

Companies are inclined to report on human resource activities concerning their internal workforce rather than activities related to employees in the supply chain (Ehnert, 2016). However, the Woodside report focuses on sustainable supply chains, delivering value to enhance positive outcomes for its stakeholders, and displays human resource activities in its supply chain, unlike Evolution, which does not consider its supply chain under the stakeholder category. To improve the stakeholder-company relationship, companies adopt a materiality approach and focus on the topics that are significantly critical to organizational goals (Calabrese, 2016). For instance, Woodside's CSR, in contrast to Evolution's, considers climate change, fraud, anti-bribery, and corruption key topics of interest under its materiality assessments.

Recommendations:

This section of the paper lists and describes some recommendations for Evolution Mining to improve its CSR.

  • Diverse reports have resulted from differing national regulations and the diverse methods of report creation. Evolution needs to adopt the globally recognized Integrated Reporting Framework, which links environmental and social information with financial information on corporate governance in a concise, clear, and comparable format (Birth, 2008). Integrated reporting collects the substantial information about an organization's prospects, performance, governance, and strategy in a way that reveals the social, environmental, and commercial context within which it functions (Dumay, 2016). Though social reporting is voluntary, several European governments are imposing mandatory reporting laws (e.g., France and Spain), while in other countries the adoption of international reporting standards is growing rapidly (Birth, 2008). Evolution should observe and report all important material information about its impacts in the areas, and on the stakeholders, that are considered imperative (Adams, 2004). Stakeholder involvement standards and auditing are correspondingly important for the Evolution CSR: AA1000 has progressively become known as the reference framework in this field, as it delivers strategies on how to involve stakeholders effectively in CSR administration processes (Moravcikova, 2015).
  • Evolution should address integrity assurance in its sustainability reports to ensure credibility. For integrity assurance it is essential to address two interdependent and complex issues; the first relates to the credentials of assurance providers, their independence, and their technical competence in defining the length of the audit and the scope of the report's contents (Sethi, 2017). One of the main elements ensuring the reliability of sustainability reports is external verification (Adams, 2004). Formal assurance providers, i.e., specialized integrity assurance firms (such as Bureau Veritas or ERM) and public accounting or auditing firms (one of the Big Four: EY, KPMG, PwC, and Deloitte), are known as external assurance; they generally include a formal audit certificate in the CSR report. Assurance given by these companies carries greater credibility, which is why they can play a fundamental role in increasing the credibility of Evolution's CSR. It is also expected that Evolution and other firms working in socially and/or environmentally sensitive industries will be more likely to have their CSR reports assured and to obtain higher-quality assurance on them (Sethi, 2017).
  • It has been observed that future-vision planning helps direct the future of humanity and focuses on achieving sustainable development through the role corporations play in setting targets and indicators. Indicators are standpoints and guidelines that assist in environmental sustainability assessment; they are used as tools for measuring sustainability actions, and companies set specific targets they desire to achieve. A target defines the pursuit of an objective within a definite time frame so that a goal can be achieved (Schwarz, 2019). In October, at the North Rankin Complex, the first robotic trial in Australia on an offshore platform was carried out by Woodside; this is an initial step toward demonstrating the company's extended capabilities in remote operation (Woodside, 2018). Evolution's target setting needs to change in order to keep the company innovative. In 2018, 93% of the 250 largest corporations published such a report (United Nations, 2018), representing success in achieving specific targets (The SDGs Report, 2018).

Conclusion:

The goal of this work was to select two corporate sustainability reports, highlight their strengths and weaknesses through a comparative analysis, and then list some recommendations for the report that does not meet the requirements of the chosen criteria. The results indicate multiple significant differences between the corporate sustainability reports of Evolution and Woodside. They imply that, for the year 2018, Woodside's CSR best demonstrates its commitment to sustainability, and its approach to the GRI reporting guidelines is in accordance with international reporting standards, making Woodside's CSR the better choice over Evolution's.

References:

  1. Abernathy, J., Stefaniak, C., Wilkins, A., & Olson, J. (2017). Literature review and research opportunities on credibility of corporate social responsibility reporting. American Journal of Business.
  2. Adams, C. A., & Evans, R. (2004). Accountability, completeness, credibility and the audit expectations gap. Journal of corporate citizenship, (14), 97-115.
  3. Birth, G., Illia, L., Lurati, F., & Zamparini, A. (2008). Communicating CSR: practices among Switzerland’s top 300 companies. Corporate Communications: An International Journal.
  4. Calabrese, A., Costa, R., Levialdi, N., & Menichini, T. (2016). A fuzzy analytic hierarchy process method to support materiality assessment in sustainability reporting. Journal of Cleaner Production, 121, 248-264.
  5. CDSB. (2019, December). CDSB Framework for Reporting Environmental and Climate Change Information. Retrieved December 2019 from https://www.cdsb.net/files/cdsb_framework_2019_v2.2.pdf
  6. Dumay, J., Bernardi, C., Guthrie, J., & Demartini, P. (2016, September). Integrated reporting: A structured literature review. In Accounting Forum (Vol. 40, No. 3, pp. 166-185). Taylor & Francis.
  7. Ehnert, I., Parsa, S., Roper, I., Wagner, M., & Muller-Camen, M. (2016). Reporting on sustainability and HRM: A comparative study of sustainability reporting practices by the world's largest companies. The International Journal of Human Resource Management, 27(1), 88-108.
  8. Evolution Mining corporate sustainability report. (2018). Retrieved from https://evolutionmining.com.au/wp-content/uploads/2018/10/18586
  9. IAP2. (2017, February 2). IAP2 Quality Assurance Standard for Community and Stakeholder Engagement. Retrieved February 2, 2017, from https://www.iap2.org.au/coresoftcloud001/ccms.r?PageID=10122&tenid=IAP2
  10. Jones, P., Comfort, D., & Hillier, D. (2016). Materiality in corporate sustainability reporting within UK retailing. Journal of Public Affairs, 16(1), 81-90.
  11. Klettner, A., Clarke, T., & Boersma, M. (2014). The governance of corporate sustainability: Empirical insights into the development, leadership and implementation of responsible business strategy. Journal of Business Ethics, 122(1), 145-165.
  12. Lahbil, R., & Wahabi, R. (2017). Reporting corporate social responsibility: At the pursuit of legitimacy, a literature review. Eurasian Journal of Business and Management, 5(3), 68-81.
  13. Michelon, G., Pilonato, S., & Ricceri, F. (2015). CSR reporting practices and the quality of disclosure: An empirical analysis. Critical Perspectives on Accounting, 33, 59-78.
  14. Perez, F., & Sanchez, L. E. (2009). Assessing the evolution of sustainability reporting in the mining sector. Environmental Management, 43(6), 949-961.
  15. Sauerwald, S., & Su, W. (2019). CEO overconfidence and CSR decoupling. Corporate Governance: An International Review, 27(4), 283-300. https://doi.org/10.1111/corg.12279
  16. Schwarz, J., & Pegels, L. (2019). Earth3 measures in sustainability reporting: Reinforcing transformational change through indicator and target setting.
  17. SDGs Report. (2018). The Sustainable Development Goals Report. Retrieved June 20, 2018 from https://www.un.org/development/desa/publications/the-sustainable-development-goals-report-2018.html
  18. Sethi, S. P., Martell, T. F., & Demir, M. (2017). Enhancing the role and effectiveness of corporate social responsibility (CSR) reports: The missing element of content verification and integrity assurance. Journal of Business Ethics, 144(1), 59-82. https://doi.org/10.1007/s10551-015-2862-3
  19. Woodside. (2018). Sustainable Development Report. Retrieved from https://www.woodside.com.au/investors/reports-publications/report/sustainable-development-report-2018

A Project Report on Sentext: A Comparative Analysis on Different Classifiers for Text-Based Sentiment Analysis


Abstract

Sentiment analysis and opinion mining is the field of study that analyzes people's opinions, sentiments, evaluations, attitudes, and emotions from written language. It is one of the most active research areas in natural language processing and is also widely studied in data mining. The growing importance of sentiment analysis coincides with the growth of various online activities such as product and movie reviews, forum discussions, blogs, Twitter, and other social networks.

With the help of supervised learning and precise datasets, I can get very good results for the prediction of sentiment.

There are several challenges in opinion mining, though. For instance, a word that is considered positive in one situation may be considered negative in another. Take the word "long", for example. If a customer says a phone's battery life is long, that is a positive opinion; if the customer says the phone's start-up time is long, that is a negative opinion. Another challenge is that people don't always express their opinions the same way: opinions differ, and a slight change in a sentence can change its whole meaning. These differences clearly show that an opinion system trained to gather opinions on one type of product or product feature may not perform well on another.

The project targets a comparative analysis of the different classifiers that can be used for text-based sentiment analysis. It also uses context-based regularisation to eliminate inconsistencies such as those shown in the previous examples. I train the machine on a large dataset, predict the output sentiment of a given paragraph with different classifiers, check their respective accuracies, and choose the classifier that gives the best result.


Chapter 1

Introduction

1.1 Sentiment Analysis

Sentiment analysis is the process of determining whether a piece of writing is positive, negative, or neutral. It is also known as opinion mining: deriving the opinion or attitude of a speaker. A generic use case is determining how different people feel about a particular topic.

1.2 Text-Based Sentiment Analysis

Say you see a new smartphone on an online store. Different people may have different opinions about the product. Humans are fairly intuitive when it comes to interpreting the tone of a piece of writing, but if we want a statistical analysis of the reviews of a particular product, the task becomes too cumbersome for humans to process alone. To accomplish it, we need a machine to process the data. Human language is complex, and teaching a machine to analyse the various grammatical nuances, cultural variations, slang, and misspellings that occur in online mentions is a difficult process. But with the right training and historical datasets, a machine can produce good results. Sentext is a text sentiment analyser that determines the polarity of a given paragraph by classifying it as positive or negative sentiment. The project targets a comparative analysis of the different classifiers that can be used for text-based sentiment analysis.

1.3 Motivation

Due to the large amount of user input data these days, the analysis and classification of user opinions has become a tough task. Text-based sentiment analysis helps overcome this in every aspect without human intervention.

Throughout this project, we perform the following activities:

  • Find out different approaches of sentiment analysis.
  • Deduce the importance of text-based sentiment analysis.
  • Elaborate the process of sentiment analysis using text.
  • Explain different approaches currently available for text-based sentiment analysis.
  • Compare different classifiers for sentiment analysis.
  • Implement the analysis technique with different classifiers to get the best results.

Chapter 2

Literature Survey

2.1 Classifiers in Machine Learning

In machine learning, classification is divided into two types:

  • Supervised Classification: All data is labeled and the algorithms learn to predict the output from the input data. Examples of such classifiers are: Naive Bayes, Support Vector Machines, Maximum Entropy, Decision Tree, Random Forest, Neural Networks, Regression.
  • Unsupervised Classification: All data is unlabeled and the algorithms learn the inherent structure from the input data. Examples of such methods are: K-means clustering, hierarchical clustering, the Hebbian learning model, and the expectation-maximization algorithm.

2.2 Different Classifiers for text-based sentiment analysis

2.2.1 Naive bayes Classifier

The Naive Bayesian classifier is an uncomplicated and widely used method for supervised learning. Bayes' theorem is named after the Reverend Thomas Bayes (1702-61), who studied how to compute a distribution for the probability parameter of a binomial distribution. It is one of the fastest learning algorithms and can deal with any number of features and classes. Naive Bayes performs remarkably well on a variety of problems; furthermore, Naive Bayesian learning is robust enough that a small amount of noise does not disturb the results. Naive Bayes is a simple technique for constructing classifiers: models that assign class labels to problem instances, represented as vectors of feature values, where the class labels are drawn from some finite set. It is not a single algorithm for training such classifiers but a family of algorithms based on a common principle: all naive Bayes classifiers assume that the value of a particular feature is independent of the value of any other feature, given the class variable.
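
A minimal sketch of such a classifier on NLTK's movie_reviews corpus, the dataset this project uses (illustrative code, not the project's exact implementation; requires the "movie_reviews" NLTK data package):

```python
# Naive Bayes sentiment classifier with bag-of-words presence features.
import random
from nltk.corpus import movie_reviews
from nltk.classify import NaiveBayesClassifier, accuracy

def bag_of_words(words):
    return {w: True for w in words}      # word-presence features

docs = [(bag_of_words(movie_reviews.words(fid)), cat)
        for cat in movie_reviews.categories()
        for fid in movie_reviews.fileids(cat)]
random.seed(42)
random.shuffle(docs)

train, test = docs[:1500], docs[1500:]   # 1500 train / 500 test, as in 5.1
clf = NaiveBayesClassifier.train(train)
print("accuracy:", accuracy(clf, test))
clf.show_most_informative_features(5)
```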

2.2.2 KNN Classifier

K-Nearest Neighbour is a non-parametric classifier that classifies an unknown point by its nearest neighbours. A (k, l)-nearest-neighbour classifier, given a feature vector x:

  • Assigns the class with the most votes among the k nearest training examples.
  • Does not classify the point if the winning class receives fewer than l votes.
  • Must answer two design questions: how are the nearest neighbours found (search), and what distance metric should be used over the feature vector (length, colour, angle; perhaps the Mahalanobis distance)?
  • Can have excellent performance for arbitrary class-conditional pdfs.

In pattern recognition, the k-nearest neighbours algorithm (k-NN) is a non-parametric method used for classification and regression. In both cases, the input consists of the k closest training examples in the feature space, and the output depends on whether k-NN is used for classification or regression. The training examples are vectors in a multidimensional feature space, each with a class label; the training phase of the algorithm consists only of storing the feature vectors and class labels of the training samples.

In the classification phase, k is a user-defined constant, and an unlabeled vector (a query or test point) is classified by assigning the label which is most frequent among the k training samples nearest to that query point.
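
A hedged sketch of k-NN text classification, assuming scikit-learn (a library the report does not name); tiny inline documents stand in for real training data:

```python
# k-NN over tf-idf vectors: a query is labelled by the majority class
# among its k nearest training documents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier

train_texts = ["great film, loved it", "wonderful acting",
               "terrible plot", "boring and slow"]
train_labels = ["pos", "pos", "neg", "neg"]

vec = TfidfVectorizer()
X = vec.fit_transform(train_texts)

knn = KNeighborsClassifier(n_neighbors=3, metric="cosine")
knn.fit(X, train_labels)
print(knn.predict(vec.transform(["loved the wonderful acting"])))
```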

2.2.3 SVM Classifier

Support vector machines are supervised learning models with associated learning algorithms that analyze data for classification and regression analysis. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other, making it a non-probabilistic binary linear classifier. An important property of SVMs is that their ability to learn can be independent of the dimensionality of the feature space. There are several advantages to using an SVM to train the system: SVMs deal well with high-dimensional data sets and do not get stuck in local minima of the error rate, which increases their accuracy.
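
A sketch of a linear SVM for polarity classification, again assuming scikit-learn; LinearSVC copes well with the high-dimensional, sparse tf-idf space:

```python
# Linear SVM sentiment classifier in a single pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

texts = ["great film, loved it", "wonderful acting",
         "terrible plot", "boring and slow"]
labels = ["pos", "pos", "neg", "neg"]

model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(texts, labels)
print(model.predict(["what a wonderful, great movie"]))
```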

2.2.4 Decision Tree classifier

Decision trees are one of the most widely used machine learning algorithms. They are popular because they can be adapted to almost any type of data. A decision tree is a supervised machine learning algorithm that divides its training data into smaller and smaller parts in order to identify patterns that can be used for classification. Whenever an unlabeled sample is given, in order to classify it, the data is passed through the tree. At each decision node a specific feature from the input data is compared with a constant identified in the training phase; the decision is based on whether the feature is greater than or less than the constant, creating a two-way split in the tree. The data passes through these decision nodes until it reaches a leaf node, which represents its assigned class.

2.2.5 Neural Networks

Neural networks are used in a wide variety of domains for the purpose of classification. The main adaptation for neural network text classifiers is the use of word features. We note that neural network classifiers are related to SVM classifiers. Each unit receives a set of inputs, denoted by the vector Xi, which in this case corresponds to the term frequencies in the i-th document. Each neuron is also associated with a set of weights A, which are used to compute a function of its inputs.

2.2.6 Random Forest

Random Forest consists of many classification trees, known as tree classifiers, which are used to classify text documents into categories. Each tree votes for a class for the input text document, and the class with the highest weight of votes is chosen. The classifier's error rate depends on the correlation between any two trees in the forest and on the strength of each individual tree; in order to minimize the error rate, the trees should be strong and independent of each other.
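
Since the project's goal is a comparison across classifiers, the sketch below (assuming scikit-learn models as stand-ins for the classifiers discussed above) trains several of them on the same tf-idf features of the movie_reviews corpus and prints their test accuracies side by side:

```python
# Compare Naive Bayes, SVM, decision tree, and random forest on one split.
from nltk.corpus import movie_reviews
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

texts = [movie_reviews.raw(fid) for fid in movie_reviews.fileids()]
labels = [movie_reviews.categories(fid)[0] for fid in movie_reviews.fileids()]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=42, stratify=labels)

vec = TfidfVectorizer(stop_words="english")
Xtr, Xte = vec.fit_transform(X_train), vec.transform(X_test)

for clf in (MultinomialNB(), LinearSVC(),
            DecisionTreeClassifier(random_state=0),
            RandomForestClassifier(n_estimators=100, random_state=0)):
    acc = accuracy_score(y_test, clf.fit(Xtr, y_train).predict(Xte))
    print(f"{type(clf).__name__}: {acc:.3f}")
```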

Chapter 3

Software Requirements Specification

SenText uses machine learning algorithms and runs on a standard system. For smooth execution, a powerful processor is recommended. Apart from this, the following specifications are recommended:

3.1 Hardware Requirements

RAM: Minimum 1 GB

Hard Disk: Minimum 10 GB

Processor: Pentium 4 and above

3.2 Software Requirements

Operating system: Ubuntu (recommended), Windows, Mac

Python version: 2.7, 3.4, or 3.5

NLTK module

NLTK data

Other pip modules for Python

Chapter 4

Requirement Analysis

Sentiment analysis or opinion mining is the process of determining the emotional tone behind a series of words, used to gain an understanding of the attitudes, opinions and emotions expressed. Sentiment analysis is an ongoing field of research in text mining. An important part of our information-gathering behavior has always been to find out what other people think. With the growing availability and popularity of opinion-rich resources such as online review sites and personal blogs, new opportunities and challenges arise as people can, and do, actively use information technologies to seek out and understand the opinions of others. The average human reader will have difficulty identifying relevant sites and accurately summarizing the information and opinions contained in them. Moreover, human analysis of text is subject to considerable biases, and a large collection of text quickly becomes tedious for a human to analyse and deduce the sentiment from. These are scenarios where text-based sentiment analysis comes in handy: we feed in the raw data to analyse, and get the results in seconds.

There are numerous applications of text sentiment analysis:

  • Determining the polarity of user reviews.
  • Customer email response satisfaction.
  • Analysis of questions such as why customers are not buying a specific product.
  • Politics and socialisation.
  • Analyzing trends, identifying ideological bias.
  • Targeting advertising/messages, gauging reactions.
  • Evaluation of public/voters’ opinions
  • And a lot more.

Chapter 5

Implementation

5.1 Dataset used

SenText uses the movie reviews dataset provided by NLTK. The dataset contains 1000 positive review files in one directory and 1000 negative review files in another. The training is done on 750 positive and 750 negative review files, totalling 1500 files of training data. The testing is done on the remaining 250 positive and 250 negative review files, totalling 500 testing instances.
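A sketch of how this split might be coded with NLTK's movie_reviews corpus; this mirrors the setup described above but is a reconstruction, not the project's actual code.

```python
# 750/250 split per class, as described above. The corpus is fetched with
# nltk.download('movie_reviews') if it is not already installed.
import nltk
from nltk.corpus import movie_reviews

def features(fileid):
    # Bag-of-words feature dictionary for one review file.
    return {word: True for word in movie_reviews.words(fileid)}

pos = [(features(f), 'pos') for f in movie_reviews.fileids('pos')]
neg = [(features(f), 'neg') for f in movie_reviews.fileids('neg')]

train = pos[:750] + neg[:750]   # 1500 training instances
test = pos[750:] + neg[750:]    # 500 testing instances

classifier = nltk.NaiveBayesClassifier.train(train)
print(nltk.classify.accuracy(classifier, test))  # roughly 0.8 in the report
```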

5.2 Platform used

The entire development is done in Python.

5.3 Result

5.3.1 Naive Bayes Classifier

Test ID | Test Condition     | System Behavior | Expected Result | Accuracy
T01     | Good Boy           | Positive        | Positive        | 80.8%
T02     | A ridiculous movie | Negative        | Negative        | 80.8%
T03     | Spread hatred      | Positive        | Negative        | 80.8%
T04     | Not a good movie   | Negative        | Negative        | 80.8%

Chapter 6

Screenshots of Project

6.1 Positive Sentiment

6.2 Negative Sentiment

Chapter 7

Conclusion and Future Scope

7.1 Conclusion

I treated the analysis of the emotional polarity of text as a binary classification problem. I used a tokenised representation of the text and then used a Naive Bayes classifier to produce the classification. The main operations on the dataset were cleaning, word segmentation, stop-word removal, feature selection and classification. The experimental results show that the Naive Bayes classifier gave a good accuracy of about 80%.

7.2 Future Scope

In subsequent developments, I will implement the other classifiers available in the literature for text-based sentiment analysis and perform a comparative performance analysis across them.

References

  1. Approaches, Tools and Applications for Sentiment Analysis Implementation; Alessia D'Andrea, Fernando Ferri, Patrizia Grifoni, Tiziana Guzzo
  2. Sentiment analysis algorithms and applications: A survey; Walaa Medhat, Ahmed Hassan, Hoda Korashy
  3. A Study and Comparison of Sentiment Analysis Methods for Reputation Evaluation; Anas Collomb, Crina Costea, Damien Joyeux, Omar Hasan, Lionel Brunie
  4. A Survey on Sentiment Analysis Methods and Approach; A. M. Abirami, V. Gayathri
  5. The Role of Text Pre-processing in Sentiment Analysis; Emma Haddi, Xiaohui Liu, Yong Shi

Ways of Applying Comparative Analysis in Scientific Research: Descriptive Essay


Comparative analysis is done to provide answers to questions about how or why a system responds to changes in one of its variables. Comparative analysis can explain, for example, why the time period of a block system would rise if the mass of the block were large. It is argued that “comparative analysis is conducted in order to clarify and gain a better understanding of the causal processes involved in the creation of an event, feature or relationships usually by bringing together variations in the explanatory variable or variables” (Pickvance, 2005). Researchers refer to cross-national research as study that aims to compare certain issues or phenomena in two or more countries across their different sociocultural settings (Gharawi, 2009).

Moreover, Azarian (2011) argues that ‘comparative research can be traced to a long history that has gained much attention in current research due to globalization and technological advances on cross-national platforms’. Kennet (2004), on the other hand, illustrates that the field of comparative social enquiry has expanded intensely since the 1960s, in terms of the number of studies being undertaken, the variety of approaches used and the countries being analysed.

Comparative social policy was formerly regarded as an almost “exotic activity”; since the 1960s, and especially after the publication of Esping-Andersen's seminal book ‘The Three Worlds of Welfare Capitalism’, it has become increasingly popular (Midgley, 2013: 182).

When it comes to carrying out comparative analysis, there are a number of reasons for doing so, and there are numerous methods for it. Tilly (1984, p.82) identifies four types of comparative analysis: the individualizing, the universalizing, the variation-finding and the encompassing. On the other hand, May (1993) provides a fourfold analysis which includes the import-mirror view, the difference view, the theory development view and the prediction view (p.117).

The “individualizing comparison contrasts a small number of cases to be able to grasp the peculiarities of each case” (Tilly, 1984, p.82). However, according to Fredrickson (1997), this method cannot be thought of as accurately comparative but makes use of comparison in a small aspect of the research. ‘The universalizing comparison aims to initiate that every instance of an event follows inherently the same rule’ (Tilly, 1984, p.82). This includes the usage of comparison in order to generate important theories with significant generality and significance, and to provide theories that explain the cases being studied.

“The variation-finding comparison plans to come up with a concept of variation in the character or intensity of an event, and this is carried out by examining systematic differences between instances” (Tilly, 1984, p.82). This helps to compare many types of a single phenomenon to find reasonable differences amongst instances and to come up with a standard of variation in the character or strength of that phenomenon. Examples of this can be found in two studies carried out by two researchers (Green, 1997, Modern Jewish Diaspora; Moore, 1996, Social Origins of Dictatorship). And finally, the encompassing comparison places “contrasting instances at different localities within the same system, on the way to describing their characteristics as a function of their extended relationships to the system as a whole” (Tilly, 1984, p.83).

Researchers might carry out comparative analysis for a number of reasons. They might do so in order to view “the theoretically postulated relationships in which societal features are the key type of independent variable”, and a comparative research design will enable some of these variables to vary (Pickvance, 2001, p.9). Another reason why comparative analysis might be carried out is to explore whether a relationship described in a study of one society also holds in another. By doing this, it intends to introduce societal features explicitly into the research design, in order to enable variables that are controlled within a single society to differ (p.9).

To explore whether a condition that is fixed for one society is influential or not it is essential to carry out comparative analysis to clarify it. Pickvance (2001) argues that “one of the most frustrating experiences after carrying out a study in one society is to be faced by a critic who says that the reason a relationship between A and P was found in that study is that some other conditions B or C were present as uncontrolled variables and that conclusions are therefore only valid for societies where conditions B and C took particular values” (p.9).

It is argued by Higgins (1986, p.24) that “comparative analysis is a methodology, rather than a substantive area of study and should be employed where it can illuminate specific questions and hypotheses”. When it comes to addressing issues in different countries, comparative analysts will engage in using different types of approaches. However, some will focus on “country specific studies detailing provisions but leaving the task of comparative evaluation to the reader” (Alcock, May and Wright, 2012, p.422).

They further illustrate that “while adopting an overt comparative approach, they start from different points on the policy compass and examine particular sectors, programs, problems, user needs or policy processes and attitudes and say that this is most likely in countries with broadly similar socio-economic and political structure” (p.422).

According to Jreisat (1992), to be able to “describe and establish patterns as generalizations requires an appropriate framework that is capable of dealing with a variety of research challenges”. In order to manage any cross-national comparative research, it is important to understand issues within the different countries of interest.

A Comparative Analysis of Renaissance Arts


Introduction

The term renaissance in context of art is considered as paintings, decorative arts and sculptures during the period of European history. The period emerged in a distinct style format in Italian province during the 1400s with parallel developments in science, philosophy, literature and music. This study takes that period into concern and shows a comparative analysis of paintings of Botticelli and Michelangelo, namely the Birth of Venus and David. The study will take style, innovation and creativity as factors for comparing the paintings. In addition, the context of contributions to the Neo-Platonism will also be highlighted for supporting the analysis.

Comparative analysis

Style

Technical aspects vary widely between the artworks by Botticelli and Michelangelo. Botticelli's work dates back to ca. 1485 and was executed using the tempera-on-canvas technique[footnoteRef:1]. The composition provides a visualisation of the goddess of beauty and love stepping onto the land on the island of Cyprus. The use of the scallop shell indicates purity and perfection, like a pearl. Dating from the 15th century, the painting was made on canvas, widely used by Botticelli for decorative works destined for noble houses. Inspiration was drawn from classical statues, and the use of nakedness reflects the light of glory perceived from the painting. [1: Campos, Daniela Queiroz, and Maria Bernardete Ramos Flores. ‘Vênus Desnuda: A Nudez Entre o Pudor e o Horror.’ Revista Brasileira de Estudos da Presença 8.2 (2018): 248,276,248A-276A. ProQuest. 18 Nov. 2019.]

On the contrary, aspects are quite different in the case of David, created by Michelangelo. Unlike various other Renaissance artists, Michelangelo preferred to work through sculpture. Within the Renaissance art movement, David was created through stone carving of a fantasy figure[footnoteRef:2]. Through his sculpture, the relationship between Greek mythology and Renaissance art was reaffirmed. The herculean physique and sinewy body reflect the power to be shown in future. Religious themes and concrete figures were utilised in designing David to display emotional relations as well. [2: Gülzow, Jörg Marvin, Liat Grayver, and Oliver Deussen. ‘Self-Improving Robotic Brushstroke Replication.’ Arts 7.4 (2018) ProQuest. 18 Nov. 2019.]

Overall importance

Both works, the Birth of Venus and David, carry varied significance. The Birth of Venus is an impressive mythological composition centred on the naked body of the goddess Venus emerging from a seashell. The painting practically embodies a new hope, the dawn of civilisation, shifts in the social and cultural background, changes in geopolitics and the rebirth of civilisation[footnoteRef:3]. The painting signifies the Renaissance drive to showcase inner beauty as a factor in self-development. Beyond this, the painting has also served as an inspiration for love and passion in human nature. [3: Oloidi, Wole. ‘HISTORICAL DOCUMENTATION OF EVENTS THROUGH VISUAL LANGUAGE: THE OCHIGBO’S PAINTINGS IN RETROSPECT.’ Global Journal of Social Sciences 15.1 (2016): 63-71. ProQuest. 18 Nov. 2019.]

In contrast to this concept of harmony and physical love, David signifies victory and triumph over evil. A strict focus on balance, harmony and ideal forms is portrayed through this sculpture. The sculpture is significant as a symbol of Florence, flattering its courage and preserving its history with unexpected strength[footnoteRef:4]. A clear definition of the Renaissance can be found in this masterpiece of Michelangelo. It also symbolises the defence of the civil liberties embodied in the Florentine Republic, then under threat. [4: De la, Puente Luna. ‘Painting the Canvas of the Great Andean Uprising: Recent Research on the Age of Tupac Amaru.’ Latin American Research Review 53.2 (2018): 381-7. ProQuest. 18 Nov. 2019.]

Innovations

Both artworks can be considered innovative, each in its own way. In the David sculpture, Michelangelo took the converse of the Goliath story, which depicted David as young, subtle and clothed. Contrary to this, Michelangelo inverted the concept and showcased David as a tall, heroic figure with a bare body. Such an innovative style raised competition in his period and set new standards. The artist also worked marble and sandstone that had not actually been commissioned for sculpting; such challenging behaviour also underlines his innovativeness.

On the other hand, the Birth of Venus showed a new kind of art signifying the iconic Renaissance. The masterpiece took the innovative concept of Christian themes alongside a blend of classical myths. It reflected the values of Neo-Platonism through the beautiful rendering of classical subject matter. At a glance, it can be seen that this kind of work showcases fluid brushwork, executed fluently by Botticelli. The integration of details helps create various levels of meaning, depicted through both symbolic and allegorical aspects to engage the viewer.

Creative differences (Neo-Platonism)

Neo-Platonism is a Platonic philosophical strand that emerged in the third century AD against the background of Hellenistic religion and philosophy. From the Neo-Platonic interpretation, the Birth of Venus was a symbol of physical love and also of a celestial goddess who inspires intellectual love[footnoteRef:5]. The interpretation accepts the painting by arguing that it signified the contemplation of physical beauty, which fuelled the human mind toward the comprehension of spiritual beauty. Taking inspiration from this concept, similar works by Botticelli were recommended as wedding paintings, intended to suggest suitable behaviour for bride and groom. [5: Berlekamp, Persis. ‘REFLECTIONS ON A BRIDGE AND ITS WATERS: FLEETING ACCESS AT JAZIRAT B. ‘UMAR / CIZRE / AIN DIWAR/REFLEJOS SOBRE UN PUENTE Y SUS AGUAS: UN ACCESO RÁPIDO A JACIRAT B. ‘UMAR / CIZRE / ‘AIN DIWAR.’ Espacio, Tiempo y Forma.5 (2017): 107-40. ProQuest. 18 Nov. 2019.]

Comparing it with Michelangelo's David from the Neo-Platonic interpretation, the sculpture aimed to bring pre-existent forms out of the material at hand. Beauty is considered a kind of concord and harmony which forms the whole structure from a mixture of fixed numbers. The interpretation states that David contemplates the divine ideas which put humans closer to the gods. The use of the naked form is symbolic of the reflection of the soul's beauty. In an overall analysis, from the Neo-Platonic interpretation, the Birth of Venus indicated physical love as a component of human behaviour[footnoteRef:6]. On the other hand, David indicated the struggle of the soul to free itself in order to achieve visions of the almighty. [6: Strijdom, Johan M. ‘“Senses”: Assessing a Key Term in David Chidester’s Analysis of Religion.’ Journal for the Study of Religion: JSR 31.2 (2018): 161-79. ProQuest. 18 Nov. 2019.]

Contribution to Renaissance art

On the surface level, Michelangelo's David can merely be termed highly interpretative of the Renaissance, as the sculpture depicts a heroic naked male figure. In reality, the composition holds more politics and complexity. The Renaissance concept implies a mastery of illusionism and humanism, which is clearly depicted in the David sculpture[footnoteRef:7]. Further, the sculpture also puts stress on the mental rather than the physical nature of David's victory. With reference to Renaissance art, David has been portrayed as a strong-willed body ready for a fight to the death. [7: Etro, Federico. ‘The Economics of Renaissance Art.’ The Journal of Economic History 78.2 (2018): 500-38. ProQuest. 18 Nov. 2019.]

In a similar context, the Birth of Venus also bears a clear relevance to the Renaissance. The European movement made its mark after the end of medieval values, with stress placed on the importance of the natural world, humanism and individualism[footnoteRef:8]. As the term Renaissance literally reflects rebirth, the focus of this painting was shifted onto the birth of love. The depiction was in turn portrayed by a naked, beautiful woman emerging to life through a tropical landscape. In a thematic sense, the central woman signifies beauty and expresses the European opinion of a beautiful woman as the epitome of love. [8: Looser, Diana. ‘Viewing Time and the Other: Visualizing Cross-Cultural and Trans-Temporal Encounters in Lisa Reihana’s in Pursuit of Venus Infected].’ Theatre Journal 69.4 (2017): 449-75. ProQuest. 18 Nov. 2019.]

Conclusion

From the above study, it can be concluded that both works reflect their own connections with the Renaissance and show varied aspects. From the comparison, it can be concluded that while on one side the Birth of Venus signified love, on the other hand David portrayed victory. Both are discussed with regard to Neo-Platonism along with overall structural aspects. In addition, both works follow their own styles of appearance and their own histories. While the Birth of Venus signified the connection of physical love through painting, David indicated mastery of sculpture.

Reference list

Journals

  1. Berlekamp, Persis. ‘REFLECTIONS ON A BRIDGE AND ITS WATERS: FLEETING ACCESS AT JAZIRAT B. ‘UMAR / CIZRE / AIN DIWAR/REFLEJOS SOBRE UN PUENTE Y SUS AGUAS: UN ACCESO RÁPIDO A JACIRAT B. ‘UMAR / CIZRE / ‘AIN DIWAR.’ Espacio, Tiempo y Forma.5 (2017): 107-40. ProQuest. 18 Nov. 2019.
  2. Campos, Daniela Queiroz, and Maria Bernardete Ramos Flores. ‘Vênus Desnuda: A Nudez Entre o Pudor e o Horror.’ Revista Brasileira de Estudos da Presença 8.2 (2018): 248,276,248A-276A. ProQuest. 18 Nov. 2019.
  3. De la, Puente Luna. ‘Painting the Canvas of the Great Andean Uprising: Recent Research on the Age of Tupac Amaru.’ Latin American Research Review 53.2 (2018): 381-7. ProQuest. 18 Nov. 2019.
  4. Etro, Federico. ‘The Economics of Renaissance Art.’ The Journal of Economic History 78.2 (2018): 500-38. ProQuest. 18 Nov. 2019.
  5. Gülzow, Jörg Marvin, Liat Grayver, and Oliver Deussen. ‘Self-Improving Robotic Brushstroke Replication.’ Arts 7.4 (2018) ProQuest. 18 Nov. 2019.
  6. Looser, Diana. ‘Viewing Time and the Other: Visualizing Cross-Cultural and Trans-Temporal Encounters in Lisa Reihana’s in Pursuit of Venus Infected].’ Theatre Journal 69.4 (2017): 449-75. ProQuest. 18 Nov. 2019.
  7. Oloidi, Wole. ‘HISTORICAL DOCUMENTATION OF EVENTS THROUGH VISUAL LANGUAGE: THE OCHIGBO’S PAINTINGS IN RETROSPECT.’ Global Journal of Social Sciences 15.1 (2016): 63-71. ProQuest. 18 Nov. 2019.
  8. Strijdom, Johan M. ‘“Senses”: Assessing a Key Term in David Chidester’s Analysis of Religion.’ Journal for the Study of Religion: JSR 31.2 (2018): 161-79. ProQuest. 18 Nov. 2019.

Comparative Analysis and SWOT Analysis: Dolphin Nautilus Versus Polaris 9550 Pool Cleaner


Pool cleaner

Pool cleaner models are offered by companies to collect sediment and debris from swimming pools and reduce human intervention. A dirty pool can cause health problems, so cleaning your pool is necessary. These models provide ease of use and can be used for both small and large pools. There are many types of pool cleaner models, including manually powered as well as battery-powered automated cleaners.

Maytronics is a US company that designs great products and ensures safety standards. Here, I will discuss two popular models available in the USA: the Maytronics Dolphin Nautilus and the Polaris 9550.

Dolphin Nautilus CC Plus pool cleaner

The Dolphin Nautilus CC Plus is a recent automatic robot cleaner that delivers industry-leading performance. For a dirty pool, you need a robotic pool cleaner that helps make your pool shine. This Dolphin model is different from other robot pool cleaners because of its special multi-cleaning functionality. The Dolphin Nautilus uses the most efficient route to clean and is one of the easier cleaners to operate, designed within the Dolphin range of robotic cleaners. It cleans no matter the size, shape or type of your pool. The pros of this model are that it cleans floors as well as walls, and it also provides smart control.

Another great characteristic of the Dolphin Nautilus is its light weight, with high-quality brushes for efficient pool cleaning. Single-button operation makes this pool cleaner ideal for beginners. Four different types of filters are available to clean a pool, and this model uses a specific filter to eliminate small leaves, debris and algae. This Dolphin model is designed with powerful motors, which is why its energy use is reduced by eighty-seven percent. It works fairly across swimming pools. The weight of this model is only 13 lbs, and it can clean to a depth of 16 in.

Polaris 9550 pool cleaner

The Polaris 9550 is another outstanding robotic pool cleaner on the market. This great machine takes the stress out of keeping your pool water crystal-clear. The top characteristic of the Polaris 9550 is that it can be controlled using a motion-sensing remote. A seven-day programmable cycle is also included for user comfort. Cleaning is made convenient by this automatic robotic cleaner. The model is a little expensive because of its variety of excellent features.

The Polaris 9550 uses four-wheel-drive technology and has the unique ability to work on all types of pool surfaces. Another special mode cleans the waterline, where heavy amounts of debris gather. The Polaris is available in different sizes on the market. It maintains your pool's purity and spotlessness. Quick access and top cleaning capabilities make this cleaner a great value. It weighs 21 lbs and can clean to a depth of 18.9 in. Powerful dual scrubbing brushes that capture large particles are an excellent feature.

SWOT Analysis

1. Dolphin Nautilus CC Plus pool cleaner

Strengths

  • The Dolphin Nautilus is an affordable, fast and smart cleaning machine. This pool cleaner comes with a two-year warranty and offers several types of filters to clean a pool.

Weaknesses

  • The Dolphin does not have a remote control to drive the cleaner. This is a weakness of this model and can decrease its efficiency and worth.

Opportunities

  • The trend toward automated pool cleaners is gradually increasing. This is an opportunity for this model to increase its prices and earn more profit by designing excellent pool cleaners.

Threats

  • The phenomenal advancements made by the Polaris 9550 pool cleaner are a threat to this model and can decrease its demand in the market.

2. Polaris 9550 pool cleaner

Strengths

  • The motion-sensing remote of this model is its main strength. It has led to increased cleaning efficiency and raises the demand for this pool cleaner. A booster pump is not required for this model.

Weaknesses

  • The main weakness of this pool cleaner is that it consumes a lot of time (1.5 to 2.5 hours per cycle), so some customers avoid this model. This weakness decreases its demand and economic growth.

Opportunities

  • Today, a motion-sensing remote in a pool cleaner is in continuous demand. This is a great opportunity for this model, as it has a unique and exceptional motion-sensing remote. In this way, the opportunities for the progress of this model increase, raising its financial growth and worth.

Threats

  • The main threat to this model is the advancement and progress made by the Dolphin Nautilus CC Plus, which has reduced the demand for this model. This is a great threat to the Polaris 9550 pool cleaner.

Comparative Analysis

A brief comparative analysis of the two models shows considerable differences between them. The Dolphin Nautilus has no sensor remote-control system or automatic timer for the cleaning cycle, whereas the Polaris has a programmable cleaning timer. The Dolphin Nautilus CC Plus is lightweight, whereas the Polaris 9550 is heavier and uses a four-wheel drive. The Dolphin Nautilus CC Plus has powerful motors, whereas the Polaris 9550 offers a motion-sensing remote.

The Polaris uses different cleaning-cycle modes, such as floor and waterline, to collect heavy amounts of debris, while the Dolphin has only a few cleaning modes, capturing smaller particles of debris. One weak point of the Dolphin is that a caddy is not included and has to be purchased separately, while the Polaris cleaner's caddy comes included.

A filled-filter indicator is another interesting characteristic of the Polaris 9550, letting you know when the filter is full and needs to be emptied. A large bag for gathering rubbish is fitted in it. The Dolphin Nautilus is not very good on steps, while the Polaris can handle all types of routes. Another great service with the Polaris 9550 is its two-year warranty, so if there is any issue the machine can be repaired by the company. The model is available in different colours, viz. blue and white shells. The Polaris 9550 uses a long 70-foot floating cable, whereas the Dolphin Nautilus uses a slightly shorter 60-foot one.

Conclusion

After a comprehensive comparative analysis, I have concluded that the Polaris 9550 pool cleaner is the better of the two models, because it has exceptional features and tools. The leading feature of this model is its motion-sensing remote for controlling its route. Most people want such features in a pool cleaner.

Vortex vacuum technology in the Polaris enables it to remove debris and dirt more efficiently. For unusually shaped pools, large pools and above-ground pools, the Polaris 9550 is a great choice. Due to its four-wheel machinery, it doesn't get stuck on the floor, goes over objects easily and can deal with many kinds of debris in different types of pool.

Thus, it is a perfect and advanced model for pool cleaning. The Dolphin Nautilus also has exceptional qualities and features, but the Polaris 9550 improves on them. The Polaris 9550 is like a dream pool cleaner on an affordable budget. Therefore, in my opinion, the Polaris 9550 is the leading pool cleaner model and deserves wide use.

Feasibility Analysis of Highway Sector in India through Comparative Analysis of Concessionaire Models


Abstract—

Investment in government infrastructure projects plays an important role in the effective advancement and development of a country. However, large-scale highway development projects increase the financial and budgetary burden on government bodies. Therefore, Public Private Partnership (PPP) was introduced, which gives the private sector an opportunity to invest in infrastructure projects. The willingness of the private sector to participate in infrastructure investment depends upon the financial and economic analysis of the project, which establishes its viability in terms of benefits in the near future as well as throughout the project lifecycle. This paper aims to study the various concessionaire models available for financing a proposed national highway in India by carrying out a feasibility analysis of the project. It is necessary to compare all the models for highway project financing to identify the one providing maximum returns on investment over the shortest period of time. The Net Present Value (NPV) method of investment analysis is used, where the project is compared in terms of the present value of future returns, the Internal Rate of Return (IRR) and the payback period. In the end, through comparative analysis of the concessionaire models, BOT Annuity + VGF showed the highest Internal Rate of Return, 14.90%, within a concession period of 30 years.

Keywords— Public Private Partnership (PPP), Net Present Value (NPV), Internal Rate of Return (IRR).

Introduction

India requires infrastructure projects worth about Rs. 50 trillion, and private-sector investment has gained significance for the sustainable development of the nation. There is enormous demand for public infrastructure and development worldwide, although the development budget of any country is always limited [1].

In India, road projects are awarded using models such as the Build, Operate & Transfer (BOT) Toll Model, the BOT Annuity Model and the Engineering, Procurement & Construction (EPC) Model. A new, advanced version of the model concession agreement has also been introduced: HAM (Hybrid Annuity Model). Previously, the financial and organisational resources of public authorities played an important role in financing highway infrastructure projects [2]. In this study, the concessionaire models currently in use in India were examined. The selection of an appropriate concessionaire model is crucial for the successful completion of a project. Concessionaire modelling plays a central role in the evaluation of projects for making project financing decisions by both lenders and equity investors. In project finance, the funding agencies look at the expected future cash flows in relation to the amount of the initial investment while making the investment decision. Equity investors use the financial model to evaluate the returns from the project in order to ascertain their adequacy. On the other hand, the financial model is used by lenders to know the level of cover for their loans and the timeliness of project debt service payments.

The Net Present Value (NPV) method of investment analysis was utilized for the selection of the concessionaire model. The NPV method uses the concept of discounted cash flow analysis for the evaluation. The NPV approach, as a project evaluation or capital budgeting procedure, demonstrates how an investment in a project influences shareholders' wealth in present-value terms [3]. The typical steps in discounted cash flow analysis involve:

  1. Estimating future cash flows based on toll revenue.
  2. Choosing a discount rate (or computing the IRR) for discounting returns.
  3. Computing the present worth of the expected future returns.
  4. Comparing whether the project is worth more than its cost.

The numerous parameters required by the NPV method and needed for concessionaire decision making were identified. A comparative analysis of the different concessionaire models was performed based on the results obtained with the NPV model. The simulation of the parameters was developed over the concession period, and the model with the maximum returns on investment over the concession period was selected, based on its feasibility analysis for a new highway project to be undertaken.

Road investment decision making parameter

The various parameters for decision making are as follows:

  • A. Net present value (NPV)
  • B. Internal rate of return (IRR)
  • C. Viability Gap funding (VGF)
  • D. Payback Period

A. Net Present Value (NPV)

NPV is usually used for capital budgeting and investment planning to study the effectiveness of a project. All cash flows expected to occur over the life span of the project, positive as well as negative, are considered in the NPV.

NPV of the project:

\[ \mathrm{NPV} = \sum_{t=1}^{n} \frac{C_t}{(1+r)^t} - C_0 \]

where C_t is the cash flow at the end of year t, C_0 is the initial investment, n is the life of the project and r is the discount rate. The NPV represents the benefit above and over the compensation for time and risk.

Hence, the decision rule associated with the net present value criterion is simple: accept a project with a positive NPV and reject a project with a negative NPV.
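As a minimal sketch of this rule (with hypothetical cash flows in crores, not the case-study figures):

```python
# NPV sketch following the formula above.
def npv(rate, cash_flows, initial_investment):
    """NPV = sum of discounted cash flows minus the initial investment."""
    pv = sum(c / (1.0 + rate) ** t for t, c in enumerate(cash_flows, start=1))
    return pv - initial_investment

flows = [40.0, 45.0, 50.0, 55.0, 60.0]   # C_t for years t = 1..5 (hypothetical)
print(npv(0.12, flows, 160.0))           # accept the project if this is positive
```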

B. Internal rate of return (IRR)

Internal rate of return of a project is the discount rate which makes its NPV equal to zero. Put differently, it is the discount rate which equates the present value of future cash flows with the initial investment. It is the value of r in the following equation.

\[ \text{Investment} = \sum_{t=1}^{n} \frac{C_t}{(1+r)^t} \]

where C_t is the cash flow at the end of year t, r is the internal rate of return (IRR), and n is the life of the project. In the NPV calculation we assume that the discount rate (the cost of capital) is known in order to determine the NPV. In the IRR calculation we set the NPV equal to zero and determine the discount rate that satisfies this condition.

Generally speaking, the higher a project's IRR, the more desirable the project is to undertake.

IRR represents the time-adjusted earnings over the project life. It is the rate that equates the present value of cash inflows to the present value of cash outflows of the project; in other words, the discount rate that sets the NPV of the cash flows to zero. In the IRR, the direct costs and benefits of the project are calculated from the investor's point of view.
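A minimal sketch of this root-finding step, reusing the npv() helper above (hypothetical figures; a production model would use a proper solver):

```python
# IRR sketch: bisection search for the rate r that drives NPV to zero.
def irr(cash_flows, initial_investment, lo=0.0, hi=1.0, tol=1e-6):
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if npv(mid, cash_flows, initial_investment) > 0:
            lo = mid          # NPV still positive: the IRR is higher
        else:
            hi = mid
    return (lo + hi) / 2.0

print(irr([40.0, 45.0, 50.0, 55.0, 60.0], 160.0))  # about 0.157, i.e. ~15.7%
```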

C. Viability Gap Funding (VGF)

Viability gap funding implies a one-time grant provided to support infrastructure projects that are economically justified but fall short of financial viability. The lack of financial viability usually arises from long construction periods and the inability to raise user charges to commercial levels. Infrastructure projects also involve various externalities that are not adequately captured in the direct financial returns to the project sponsor.

The Government of India has notified a scheme of viability gap funding for infrastructure projects that are to be undertaken through public-private partnership. The quantum of VGF provided under this scheme takes the form of a capital grant at the project construction stage.

Designation of Cess Revenues for Viability Gap Funding

The average viability gap funding has been assumed as 30% of the project cost. The maximum in selected cases can go up to 40% of the project cost.

Allocation of cess revenues by the Government for funding the annual plan outlays of NHAI may be split into two parts viz. (a) PPP component, and (b) EPC, O&M and Misc. component.

D. Payback period

The payback period is the time required to recoup the initial cash outlay on the project. If the annual cash inflow is a constant sum, the payback period is simply the initial outlay divided by the annual cash inflow. According to the payback criterion, the shorter the payback period, the more desirable the project. Firms using this criterion generally specify a maximum acceptable payback period: if this is n years, projects with a payback period of n years or less are deemed worthwhile, and projects with a payback period exceeding n years are rejected.
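A one-line illustration of the constant-inflow case (hypothetical figures):

```python
# Payback period for a constant annual inflow (illustrative numbers in Cr).
initial_outlay = 328.71
annual_inflow = 60.0
print(round(initial_outlay / annual_inflow, 2))  # about 5.48 years; shorter is better
```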

Data collection

Traffic flow or volume is measured in terms of the number of vehicles per unit time. The common units of time are the day and the hour; thus flows are measured in vehicles per day or vehicles per hour. Daily traffic volume is denoted by the terms ADT or AADT. ADT (Average Daily Traffic) is the value when traffic counts are taken for a limited period of, say, 3 to 7 days and the daily average is determined. AADT (Annual Average Daily Traffic) is the value when traffic counts are taken for all 365 days of the year and the daily average is determined.

Since Indian traffic is heterogeneous, the traffic is converted in terms of passenger car units (PCUs).

Table I. Passenger Car Unit (PCU) equivalence factors (Source: IRC 106:1996)

Vehicle type                                      | Equivalence factor
Fast vehicles:
1. Motor Cycle or Scooter                         | 0.5
2. Passenger Car, Pick-up Van or Auto-rickshaw    | 1
3. Agricultural Tractor, Light Commercial Vehicle | 1.5
4. Truck or Bus                                   | 3
5. Truck-trailer, Agricultural Tractor-trailer    | 4.5
Slow vehicles:
6. Cycle                                          | 0.5
7. Cycle-rickshaw                                 | 2
8. Hand Cart                                      | 3

Traffic volume data for project

The annual average daily traffic volume was collected and increased by 5% each year for traffic growth, as per the guidelines given in the “Financing Plan of National Highways”.

Table II. Average Annual Daily Traffic (AADT)

Vehicle type    | PCU factor | Growth, existing traffic | Growth, proposed traffic | AADT
2 Wheeler       | 0.5        | 5% | 5% | 3,338
3 Wheeler       | 1          | 5% | 5% | 1,211
Car/Jeep        | 1          | 5% | 5% | 267
LCV             | 1.5        | 5% | 5% | 251
Mini Bus        | 1.5        | 5% | 5% | 320
Trucks (2-Axle) | 3          | 5% | 5% | 55
Private Bus     | 3          | 5% | 5% | 85
Govt. Bus       | 3          | 5% | 5% | 180
Trucks (3-Axle) | 3          | 5% | 5% | 195
MAV             | 4.5        | 5% | 5% | 280
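As a small illustration of how these factors are applied, the sketch below converts the AADT counts of Table II into PCUs and projects them at the stated 5% annual growth (illustrative code, not part of the paper):

```python
# Convert AADT counts (Table II) into PCUs and apply 5% compound growth.
pcu_factor = {'2W': 0.5, '3W': 1, 'Car/Jeep': 1, 'LCV': 1.5, 'Mini Bus': 1.5,
              'Truck 2-Axle': 3, 'Private Bus': 3, 'Govt. Bus': 3,
              'Truck 3-Axle': 3, 'MAV': 4.5}
aadt = {'2W': 3338, '3W': 1211, 'Car/Jeep': 267, 'LCV': 251, 'Mini Bus': 320,
        'Truck 2-Axle': 55, 'Private Bus': 85, 'Govt. Bus': 180,
        'Truck 3-Axle': 195, 'MAV': 280}

total_pcu = sum(aadt[v] * pcu_factor[v] for v in aadt)
print('Base-year PCUs per day:', total_pcu)
for year in range(1, 6):                 # project 5 years at 5% growth
    print(year, round(total_pcu * 1.05 ** year, 1))
```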

Toll revenue

WPI, the Wholesale Price Index, is an index that tracks and measures changes in the prices of goods before the retail level (the level at which goods are sold in bulk and traded between businesses rather than to consumers). WPI is expressed as a percentage ratio. It indicates the average change in prices and is seen as an indicator of a country's inflation level.

Table III. Base rate for different classes of vehicles (Source: The Gazette of India, Part 2, Section 3, Sub-section 1)

Class of Vehicle      | Base Rate (%)
Car, Jeep, Van or LMV | 0.65
LCV, LGV, Mini bus    | 1.05
Truck, Bus            | 2.2
3 Axle                | 2.4
4 to 6 axle / HCM     | 3.45
O/S vehicles          | 4.2

The base rate is increased by 3% for every year.

An example of evaluating toll rates: if the base rate for 2008-09 for a car, jeep or van is 0.6695, the WPI for 2007 is 208.70 and the WPI for 2008 is 218.58, then the base rate for the toll fee is worked out from these figures.
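The worked figure itself is missing from the source. One plausible reading, assuming the base rate is scaled in direct proportion to WPI (the gazetted fee rules may instead pass through only a fraction of WPI growth, so this is an assumption, not the official formula):

\[ \text{Revised base rate} = 0.6695 \times \frac{218.58}{208.70} \approx 0.7012 \]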

Financial plan for national highway project – a case study

Modes of delivery for highway projects:

In this research, the following modes of project delivery are identified, in order of priority:

  • A. BOT (Toll) without VGF
  • B. BOT (Toll) With VGF
  • C. BOT (Annuity)
  • D. Hybrid annuity model (BOT Annuity plus VGF)
  • E. EPC

All highways which are to be tolled should adhere to the BOT (Toll) mode in accordance with the extant framework approved by the CoI/Cabinet, especially the cap of 40% on the grant element.

Data and Assumptions

A case study of a national highway project was considered for the financial analysis. The construction of a highway takes a number of years, and maintenance and operation are similarly carried out over a period of time. The phase cost of the project was calculated at Rs. 328.71 crores, with a construction cost of 40% of the phase cost in the first year and 60% in the second year. It was assumed that annual maintenance is 1% of the phase cost and periodic maintenance is 6% of the phase cost. The routine operation and maintenance cost is 3.29 crores, and the periodic maintenance is 19.72 crores, incurred every 5 years.

The costs of construction, annual maintenance and periodic maintenance are escalated with inflation of 5% for each year of the concession period. It was also assumed that 5% of the yearly toll revenue would be spent on toll plaza operations.

In respect of annuity projects, the IRR has been considered at 15% per annum for the purpose of calculating annuity payments, as per the guidelines given in the “Financial Plan for National Highway Development Programme”.

A. BOT toll without VGF

In the initial case it was assumed that no VGF would be provided by the government. The cash flow was generated over the concession period of 30 years and the IRR was calculated for it. The cash outflow includes the costs of construction, annual maintenance and periodic maintenance.

An option with an IRR of around 14.90% is considered viable for financial planning of the project.

Table IV. Cash outflow for BOT model without VGF

Year      | Inflation factor (5% p.a.) | VGF (Cr) | Annual maintenance, with inflation (Cr) | Periodic maintenance, with inflation (Cr) | Total outflow (Cr)
2019-2020 | 1.22 | 0.00 | 4.00  | 0.00  | 4.00
2020-2021 | 1.28 | 0.00 | 4.20  | 0.00  | 4.20
2021-2022 | 1.34 | 0.00 | 4.41  | 0.00  | 4.41
2022-2023 | 1.41 | 0.00 | 4.63  | 0.00  | 4.63
2023-2024 | 1.48 | 0.00 | 4.86  | 29.14 | 34.00
2024-2025 | 1.55 | 0.00 | 5.10  | 0.00  | 5.10
2025-2026 | 1.63 | 0.00 | 5.35  | 0.00  | 5.35
2026-2027 | 1.71 | 0.00 | 5.62  | 0.00  | 5.62
2027-2028 | 1.80 | 0.00 | 5.90  | 0.00  | 5.90
2028-2029 | 1.89 | 0.00 | 6.20  | 0.00  | 6.20
2029-2030 | 1.98 | 0.00 | 6.51  | 39.05 | 45.56
2030-2031 | 2.08 | 0.00 | 6.83  | 0.00  | 6.83
2031-2032 | 2.18 | 0.00 | 7.18  | 0.00  | 7.18
2032-2033 | 2.29 | 0.00 | 7.53  | 0.00  | 7.53
2033-2034 | 2.41 | 0.00 | 7.91  | 0.00  | 7.91
2034-2035 | 2.53 | 0.00 | 8.31  | 0.00  | 8.31
2035-2036 | 2.65 | 0.00 | 8.72  | 52.33 | 61.05
2036-2037 | 2.79 | 0.00 | 9.16  | 0.00  | 9.16
2037-2038 | 2.93 | 0.00 | 9.62  | 0.00  | 9.62
2038-2039 | 3.07 | 0.00 | 10.10 | 0.00  | 10.10
2039-2040 | 3.23 | 0.00 | 10.60 | 0.00  | 10.60
2040-2041 | 3.39 | 0.00 | 11.13 | 0.00  | 11.13
2041-2042 | 3.56 | 0.00 | 11.69 | 70.13 | 81.82
2042-2043 | 3.73 | 0.00 | 12.27 | 0.00  | 12.27
2043-2044 | 3.92 | 0.00 | 12.89 | 0.00  | 12.89
2044-2045 | 4.12 | 0.00 | 13.53 | 0.00  | 13.53
2045-2046 | 4.32 | 0.00 | 14.21 | 0.00  | 14.21
2046-2047 | 4.54 | 0.00 | 14.92 | 0.00  | 14.92
2047-2048 | 4.76 | 0.00 | 15.66 | 93.98 | 109.64
2048-2049 | 5.00 | 0.00 | 16.45 | 0.00  | 16.45
2049-2050 | 5.25 | 0.00 | 17.27 | 0.00  | 17.27
2050-2051 | 5.52 | 0.00 | 18.13 | 0.00  | 18.13
2051-2052 | 5.79 | 0.00 | 19.04 | 0.00  | 19.04

Table V. Cash inflow for BOT model without VGF

Year      | Annuity (Cr) | Toll revenue (Cr) | Toll collection charges, 5% of toll revenue (Cr) | Total inflow (Cr)
2019-2020 | 0.00 | 6.74   | 0.34 | 6.40
2020-2021 | 0.00 | 7.48   | 0.00 | 7.48
2021-2022 | 0.00 | 8.30   | 0.42 | 7.89
2022-2023 | 0.00 | 8.99   | 0.45 | 8.54
2023-2024 | 0.00 | 10.00  | 0.50 | 9.50
2024-2025 | 0.00 | 11.04  | 0.55 | 10.49
2025-2026 | 0.00 | 12.31  | 0.62 | 11.69
2026-2027 | 0.00 | 13.28  | 0.66 | 12.61
2027-2028 | 0.00 | 14.82  | 0.74 | 14.08
2028-2029 | 0.00 | 16.31  | 0.82 | 15.49
2029-2030 | 0.00 | 18.05  | 0.90 | 17.15
2030-2031 | 0.00 | 19.87  | 0.99 | 18.87
2031-2032 | 0.00 | 21.95  | 1.10 | 20.85
2032-2033 | 0.00 | 24.58  | 1.23 | 23.35
2033-2034 | 0.00 | 27.02  | 1.35 | 25.66
2034-2035 | 0.00 | 29.70  | 1.49 | 28.22
2035-2036 | 0.00 | 32.67  | 1.63 | 31.04
2036-2037 | 0.00 | 36.50  | 1.83 | 34.68
2037-2038 | 0.00 | 40.01  | 2.00 | 38.01
2038-2039 | 0.00 | 44.53  | 2.23 | 42.30
2039-2040 | 0.00 | 49.44  | 2.47 | 46.97
2040-2041 | 0.00 | 54.84  | 2.74 | 52.10
2041-2042 | 0.00 | 60.71  | 3.04 | 57.67
2042-2043 | 0.00 | 67.08  | 3.35 | 63.73
2043-2044 | 0.00 | 73.99  | 3.70 | 70.29
2044-2045 | 0.00 | 81.87  | 4.09 | 77.78
2045-2046 | 0.00 | 90.34  | 4.52 | 85.82
2046-2047 | 0.00 | 100.50 | 5.02 | 95.47
2047-2048 | 0.00 | 111.44 | 5.57 | 105.87
2048-2049 | 0.00 | 123.69 | 6.18 | 117.51
2049-2050 | 0.00 | 137.20 | 6.86 | 130.34
2050-2051 | 0.00 | 151.74 | 7.59 | 144.16
2051-2052 | 0.00 | 168.01 | 8.40 | 159.61

Net cash flow was calculated as the difference between outflow and inflow. The IRR was then calculated by setting the NPV to zero, and was found to be 3.73%.

Table VI. NPV and IRR for BOT model without VGF (cash flows discounted at the IRR of 3.73%)

Year      | Year no. | Net cash flow (Cr) | NPV (Cr)
2019-2020 | 2  | -2.40   | -2.15
2020-2021 | 3  | -3.29   | -2.84
2021-2022 | 4  | -3.48   | -2.90
2022-2023 | 5  | -3.91   | -3.14
2023-2024 | 6  | 24.49   | 18.95
2024-2025 | 7  | -5.39   | -4.02
2025-2026 | 8  | -6.34   | -4.56
2026-2027 | 9  | -6.99   | -4.85
2027-2028 | 10 | -8.18   | -5.47
2028-2029 | 11 | -9.29   | -5.99
2029-2030 | 12 | 28.41   | 17.65
2030-2031 | 13 | -12.04  | -7.21
2031-2032 | 14 | -13.68  | -7.90
2032-2033 | 15 | -15.82  | -8.81
2033-2034 | 16 | -17.75  | -9.53
2034-2035 | 17 | -19.91  | -10.30
2035-2036 | 18 | 30.01   | 14.97
2036-2037 | 19 | -25.52  | -12.27
2037-2038 | 20 | -28.39  | -13.16
2038-2039 | 21 | -32.20  | -14.39
2039-2040 | 22 | -36.37  | -15.67
2040-2041 | 23 | -40.97  | -17.01
2041-2042 | 24 | 24.15   | 9.67
2042-2043 | 25 | -51.45  | -19.86
2043-2044 | 26 | -57.40  | -21.36
2044-2045 | 27 | -64.25  | -23.04
2045-2046 | 28 | -71.62  | -24.76
2046-2047 | 29 | -80.55  | -26.85
2047-2048 | 30 | 3.77    | 1.21
2048-2049 | 31 | -101.06 | -31.31
2049-2050 | 32 | -113.07 | -33.77
2050-2051 | 33 | -126.03 | -36.28
2051-2052 | 34 | -140.57 | -39.02

IRR = 3.73%

B. BOT toll with VGF

In this case it was assumed that VGF would be provided by NHAI. A VGF of 40% of the total phase cost is given by NHAI in two stages, in 2016-17 and 2017-18. The cash flow was generated over the concession period of 30 years and the IRR was calculated for it. The cash outflow includes the costs of construction, annual maintenance and periodic maintenance.

An option with an IRR greater than 14% is considered viable for financial planning of the project.

Net cash flow was calculated as the difference between outflow and inflow. The IRR was then calculated by setting the NPV to zero, and was found to be 5.73%.

C. BOT with annuity

Highway projects which are not amenable to the BOT (Toll) mode, including projects which are not to be tolled under government policy, should be undertaken in the BOT (Annuity) mode. In this case it was assumed that NHAI would start payments to the concessionaire as soon as the project started. The annuity is provided on the total phase cost, i.e. on 328.71 Cr.

The annuity is calculated using the capital recovery formula of economic analysis:

\[ A = P \cdot \frac{r(1+r)^n}{(1+r)^n - 1} = 328.71 \times \frac{0.15(1.15)^5}{(1.15)^5 - 1} \approx 98.06 \ \text{Cr} \]

The annuity of 98.06 Cr is provided over a period of 5 years from the commencement of the project. The cash flow was generated over the concession period of 30 years and the IRR was calculated for it. The cash outflow includes the costs of construction, annual maintenance and periodic maintenance.

Net cash flow was calculated as the difference between outflow and inflow. The IRR was then calculated by setting the NPV to zero, and was found to be 13.94%.

D. BOT with annuity plus VGF

Highway projects which are not amenable to the BOT (Annuity) mode, including projects which are not to be tolled under government policy, should be undertaken in the BOT (Annuity plus VGF) mode. In this case it was assumed that NHAI would start payments to the concessionaire as soon as the project started, and that a VGF of 40% of the phase cost (131.48 Cr) would be provided in two stages. The annuity is provided over a period of 5 years on the remaining 60% of the phase cost, which is 197.23 Cr.

The annuity was calculated using the same capital recovery formula:

\[ A = 197.23 \times \frac{0.15(1.15)^5}{(1.15)^5 - 1} \approx 58.83 \ \text{Cr} \]

The cash flow was generated over the concession period of 30 years and IRR was calculated for same. The cash outflow includes costs like cost of construction, annual maintenance, and periodic maintenance.

Net cash flow was calculated as the difference between outflow and inflow. The IRR was then calculated by setting the NPV to zero, and was found to be 14.90%.
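As a quick cross-check of the two annuity figures, a short script applying the capital recovery formula with the stated 15% rate and 5-year recovery period:

```python
# Capital recovery: A = P * r(1+r)^n / ((1+r)^n - 1), with r = 15%, n = 5.
def annuity(principal, r=0.15, n=5):
    factor = (r * (1 + r) ** n) / ((1 + r) ** n - 1)
    return principal * factor

print(round(annuity(328.71), 2))  # -> 98.06, the BOT (Annuity) figure
print(round(annuity(197.23), 2))  # -> 58.84, vs. 58.83 Cr in the text (rounding)
```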

Table VII. Summary of IRR and NPV for the different cases of the financial model

S.N. | Option               | Project cost (Cr) | Grant 40% (Cr) | Annuity over 5 years (Cr) | IRR (%) | NPV (Cr) | Concession period
1    | BOT-Toll without VGF | 328.71 | -      | -     | 3.73  | -0.18 | 30 years
2    | BOT-Toll with VGF    | 328.71 | 131.48 | -     | 5.73  | -0.21 | 30 years
3    | BOT-Annuity          | 328.71 | -      | 98.06 | 13.94 | -0.05 | 30 years
4    | BOT-Annuity + VGF    | 328.71 | 131.48 | 58.83 | 14.90 | 0.01  | 30 years

Results

The results of the four models are summarized as follows. Options 1 and 2 have NPVs close to zero but do not satisfy the minimum IRR criterion of 15%. From the results illustrated in Table VII, it can be seen that the IRR is maximum for BOT (Annuity + VGF); the corresponding NPV is essentially zero, and the IRR value is greater than 14%, which makes the project financially viable. So it can be clearly suggested that the option with payment given by NHAI in the form of an annuity plus a VGF of 40% is more financially stable compared with the other options.

Conclusion

The concessionaire models used in the current scenario in India were studied. The cash flows for the different cases, which drive the different concessionaire models, were created, and the financial viability of the highway project was studied through a comparative analysis of the concessionaire models. On the basis of the case study of the highway project's financial feasibility, it is concluded that, out of the different models of highway finance, the most suitable model for obtaining a return on investment is BOT (Annuity + VGF), with an internal rate of return of 14.90% and a concession period of 30 years. The proposed model resembles the Hybrid Annuity Model for financing the project.

References

  1. Xueqing Zhang, Shu Chen, (2012). A Systematic Framework for Infrastructure Development through Public Private Partnerships. IATSSR, 00046.
  2. Tanaphat Jeerangsuwan, Hisham Said, Amr Kandil, Satish Ukkusuri, (2014). Financial Evaluation for Toll Road Projects Considering Traffic Volume and Serviceability Interactions. ASCE, Volume 20.
  3. Surendranath Rakesh Jory, Abdelhafid Benamraoui, Devkumar Roshan Boojihawon, Nnamdi O. Madichie, (2016). Net Present Value Analysis and the Wealth Creation Process: A Case Illustration. The Accounting Educators' Journal, Volume XXVI.
  4. Buen O. and Mantilla B.J.O., (2000). PPPs for Road Development in Mexico. ASCE, 41-51.
  5. Singh L. B. and Kalidindi S.N., (2006). Traffic revenue risk management through annuity model of PPP road projects in India. International Journal of Project Management, 24(7), pp.605-613.
  6. Boeing Singh and Kalidindi S.N., (2009). Criteria influencing debt financing of Indian PPP road projects: a case study. Journal of Financial Management of Property and Construction, 14(1), pp.34-60.
  7. El-Sayegh, S. M., Mansour M. H., (2015). Risk assessment and allocation in highway construction projects in the UAE. Journal of Management in Engineering, 31(6), p.04015004.
  8. Tokiwa N., Queiroz C., (2017). Guarantees and other support options for PPP road projects: Mitigating the Perception of Risks. Advances in Public Private Partnership, 624-632.
  9. IRC: SP: 73-2015, Manual of specifications and standards for two laning of highways with paved shoulders.
  10. IRC: SP: 84-2014, Manual of specifications and standards for four laning of highways through Public Private Partnership.
  11. IRC: SP: 30-2009, Manual on economic evaluation of Highway projects in India.

Comparative Analysis of Extracellular Polysaccharide Production by Dairy Milk Derived Lactic Acid Bacteria Grown on De Man, Rogosa, And Sharpe Medium


The diverse microbial flora found in dairy cow milk contributes beneficial effects to human health. A group of microorganisms known as Lactic Acid Bacteria (LAB) is most commonly found and used in fermented dairy products. These bacterial strains embrace the idea of good nutrition by assisting with health maintenance and aiding in the prevention, control and treatment of many diseases. Heteropolysaccharides (HePS) produced by LAB play an important role in the rheology, texture, body, and “mouthfeel” of fermented milk. HePS built from sugars such as D-galactose, D-glucose, and L-rhamnose were tested under various conditions for a comparative analysis of polysaccharide production efficiency. LAB strains were identified through biochemical tests such as the Gram stain, catalase, and motility tests. Furthermore, these strains were cultured at different temperatures, pH values, and incubation times.

Introduction

This M.S. thesis emphasizes lactic acid bacteria and their ability to produce extracellular polysaccharides under various conditions. The objectives addressed include 1) determining the LAB strains found in commercially sold dairy milk and 2) identifying factors capable of impacting polysaccharide secretion. Addressing these focus points will allow further studies on these strains of bacteria capable of promoting human wellness.

1.1 Milk microbiome

Milk itself is known to contain several types of bacteria, one common group being the lactic acid bacteria (LAB). This group of bacteria is characterized as Gram-positive, non-sporulating, anaerobic or facultatively aerobic cocci or rod-shaped microorganisms. They produce lactic acid as one of the main fermentation products of carbohydrate metabolism. In addition to LAB, many other microorganisms are present in milk, as it provides high nutrient content such as proteins, fats, carbohydrates, vitamins, minerals, and essential amino acids (10). This provides an ideal environment for the growth of many microbes. It is generally accepted that LAB are the dominant population in milk, including the genera Lactococcus, Lactobacillus, Leuconostoc, Streptococcus, and Enterococcus (10).

Strains of non-LAB genera are also present in dairy milk, including various yeasts and molds. In some cases, milk may be contaminated with microbial pathogens, leading to severe illness. One prime example is the group of bacterial strains known as psychrotrophs, capable of surviving in cold storage and consisting of Pseudomonas and Acinetobacter spp. (10). These bacteria can proliferate during refrigeration and produce extracellular proteins, such as lipases and proteases, negatively impacting the quality of milk and resulting in spoilage (10). Another common example is Helicobacter pylori. These strains can be found in raw sheep's milk or in other contaminated milk products, and they are responsible for cancers of the digestive tract known as gastric cancer. These microaerophilic, spiral-shaped microbes deploy several mechanisms to survive the stomach's acidic environment, including using their flagella to colonize the human gastrointestinal tract, hydrolyzing urea and releasing ammonia with a urease enzyme to neutralize gastric acid, and adhering to the gastric epithelium through receptor-mediated adhesion (20).

Several processing techniques such as thermization, Low Temperature Long Time (LTLT) pasteurization, High Temperature Short Time (HTST) pasteurization, sterilization, ultra-high temperature treatment, ultraviolet treatment, microwave treatment, membrane processing and microfiltration are used to treat raw milk for safe human consumption (13).

In contrast, dairy milk microorganisms can provide beneficial contributions to human health, aiding in digestion or reducing allergies (10). They are often defined as probiotics due to their assistance in health maintenance through the treatment of diseases. For example, studies have shown that dietary supplementation of probiotic Lactobacillus reuteri in both aged humans and mice is associated with a younger appearance compared with untreated counterparts (17). In addition, consumption of L. reuteri has been shown to accelerate the healing of skin wounds by up-regulating the pituitary neuropeptide hormone oxytocin (17). Moreover, bacteria such as Propionibacterium freudenreichii, derived from complex dairy products, were shown to induce apoptosis in human colon cancer cell lines. When co-cultured with these cancer cells, P. freudenreichii secretes active compounds that trigger the intrinsic mitochondrial apoptotic pathway in the human colorectal cancer cells (18). The released anti-carcinogenic metabolites were identified as short-chain fatty acids (SCFAs) and other unknown compounds (18). These studies have led to further investigations of probiotics and their ability to halt cancer development. Milk from cows, sheep, goats and humans is a source of microorganisms that play a number of roles in the health and food industries.

1.2 Lactic acid bacteria

As mentioned, LAB are the predominant population in dairy milk. They are naturally present in milk, cheese, meat, beverages, and vegetables, and can be isolated from soil, lakes, and the intestinal tracts of animals and humans (6). In the food industry, LAB are heavily utilized in food fermentation. They are grouped as homofermenters or heterofermenters: homofermenters produce lactic acid as the main product of glucose fermentation, while heterofermenters produce lactic acid, carbon dioxide, acetic acid, and ethanol from the fermentation of glucose (6). Hence, this group of bacteria is recognized for a fermentative ability that enhances food safety and supplies health benefits. Through metabolic activities including lipolysis and proteolysis, these microorganisms produce organoleptic properties such as aroma and flavor compounds, contributing to the overall texture of fermented food (14).
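The overall reactions usually cited in textbooks for the two fermentation modes make the distinction concrete (a general summary, not drawn from reference (6)):

\[
\text{Homolactic: } \mathrm{C_6H_{12}O_6} \rightarrow 2\,\mathrm{CH_3CHOHCOOH}
\]
\[
\text{Heterolactic: } \mathrm{C_6H_{12}O_6} \rightarrow \mathrm{CH_3CHOHCOOH} + \mathrm{C_2H_5OH} + \mathrm{CO_2}
\]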

The probiotic properties of LAB play an important role in keeping undesirable pathogens and harmful bacteria in check. They have antibacterial activity against Gram-negative and Gram-positive bacteria such as Escherichia coli, Pseudomonas aeruginosa, and Staphylococcus aureus (12). In addition, LAB are resistant to lysozyme, gastric acid, gastrointestinal juice, and bile salts (12). Today, these bacteria are gaining medical and environmental attention as potential tools for treating pathogenic infections. LAB also stimulate a wide range of host immune activities, including the prevention of diarrhea caused by antibiotic treatment or viral infections, vitamin production, and the reduction of blood cholesterol levels (14).

Recent metagenomic data support the view that LAB are part of the microbiomes of humans and other animals. LAB are classified as Gram-positive, non-spore-forming bacteria that are microaerophilic or anaerobic, and they generally have a low GC content.

Comparative Analysis of Substitutive 3D Models Fragile Watermarking Techniques


Abstract—

Due to the importance of multimedia data and the urgent need to use it in many fields, such as industry, medicine, and entertainment, protecting it has become an important issue. Digital watermarking is considered an efficient solution for multimedia security, as it preserves the original media content as it is. 3D fragile watermarking aims to detect any attack on 3D graphical models in order to protect the copyright and ownership of the models. In this paper, we present a comparative analysis of two substitutive 3D fragile watermarking algorithms. The first is based on an adaptive watermark generation technique using the Hamming code, while the other uses a chaos sequence for fragile watermarking of 3D models in the spatial domain. The study uses different assessment measures to show the points of strength and weakness of both methods.

Keywords—Adaptive watermarking, Hamming code, chaos sequence, tampering detection, authentication

Introduction

Information security refers to the protection of information from unauthorized access, use, modification, or destruction in order to achieve confidentiality, integrity, and availability. There are two broad approaches: information hiding and information encryption (cryptography). Encryption is the science of protecting information from unauthorized parties by converting it into a form that is not recognizable by attackers. Information hiding embeds a message (watermark) in a cover signal such that its presence cannot be detected during transmission. There are two categories of information hiding: steganography and watermarking. The main goal of steganography is to protect the message itself and to hide as much data as possible in the cover signal, while the goal of watermarking is to protect the cover signal by hiding data (the watermark) in it.

Watermarking may be used for a wide range of applications, such as copyright protection and content authentication. There are three types of watermarking according to the goal to be achieved: robust, fragile, and semi-fragile watermarking. The aim of a robust watermark is to protect the ownership of the digital media and to keep the embedded watermark detectable after attacks. A fragile watermark, on the other hand, aims to be sensitive to any attack on the model, to locate the changed regions, and possibly to predict how the model looked before modification; fragile watermarking is therefore used for content authentication and verification. A semi-fragile watermark combines the advantages of both: it is more robust than a fragile watermark and less sensitive to ordinary user modifications, with the aim of discriminating between malicious and non-malicious attacks.

Following the extensive interest and work on watermarking multimedia content such as images, audio, video, and text, and with the growth of 3D graphical model generation and its spread into data representations for other applications (e.g., fuel or water pipeline models and 3D cartoon models), researchers have recently taken a strong interest in watermarking 3D models.

In this paper, we present a comparative analysis of two adaptive fragile watermarking techniques [1, 2] and clarify their advantages and weaknesses. The paper is organized as follows: Section 2 reviews fragile watermarking in the state of the art; Section 3 briefly explains the methods used in this study; Section 4 presents the experimental results with empirical analysis; finally, conclusions are provided in Section 5.

Related work

Watermark embedding strategies are primarily divided into two classes: additive and substitutive. In the additive strategy, the watermark is treated as a random noise pattern that is added to the mesh surface, as in [3-6]. In the substitutive strategy, the watermark is embedded in the numerical values of the mesh elements by selective bit substitution, as in [1, 2, 7, 8, 9]. Based on the embedding style, the watermark may be embedded in different embedding primitives, as follows:

A. Data file organization

This category utilizes the redundancy of polygon models to carry information. Ichikawa et al. [10] modified the order of the triangles (the order of the triplet of vertices forming a given triangle), using only the redundancy of the description. Wu et al. [11] used mesh partitioning to divide the mesh into patches with a fixed number of vertices; the geometrical and topological information of each patch, as well as other properties (color, texture, and material), is used to produce the hash value that serves as the signature embedded in the model. The goal of Bennour et al. [12] was to protect the visual presentations of a 3D object in images or videos after it has been marked; they also proposed an extension of a 2D contour watermarking algorithm to 3D silhouettes. Sales et al. [13] presented a method that protects the intellectual property rights of 3D objects through their 2D projections.

B. Topological data

These algorithms use the topology of the 3D object to embed the watermark, which changes the triangulation of the mesh. Ohbuchi et al. [14] presented two algorithms: a visible one in which the local triangulation density is changed to insert a visible watermark using the triangle similarity quadruple (TSQ) algorithm, and a blind one that embeds the watermark by topological ordering with the tetrahedral volume ratio (TVR) method. The method of Mao et al. [15] retriangulates part of a triangle mesh to embed the watermark in the new positions of the vertices; this algorithm is considered reversible, as it allows full extraction of the embedded information and complete restoration of the cover signal.

C. Geometrical data

Most 3D fragile watermarking algorithms embed the watermark by modifying the geometry of the 3D object, either in the spatial domain or in the frequency domain. Yeo and Yeung [16] proposed the first 3D fragile watermarking algorithm, in which each vertex is slightly perturbed according to a pre-defined hash function so that all vertices become valid for authentication. Lin et al. [4] and Chou et al. [17] solved the causality problem raised by Yeo's method: the former by making both hash functions depend only on the coordinates of the current vertex [4], and the latter by proposing a multi-function vertex embedding method and an adjusting-vertex method [17]. Targeting high-capacity watermarks, Cayre and Macq [18] considered a triangle as a two-state geometrical object and classified the triangle edges, based on the traversal, into entry and exit edges, where the entry edge is modulated using quantization index modulation (QIM) to embed watermark bits.
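To make the QIM primitive concrete, the following minimal Python sketch (our own illustration, not code from [18]) embeds one bit into a scalar value by snapping it to one of two interleaved quantization lattices; the step size delta is an assumed parameter.

```python
import numpy as np

def qim_embed(x: float, bit: int, delta: float = 0.01) -> float:
    """Embed one bit with quantization index modulation (QIM).

    The value is snapped to one of two quantization lattices,
    offset from each other by delta/2, depending on the bit.
    """
    offset = bit * delta / 2.0
    return delta * np.round((x - offset) / delta) + offset

def qim_extract(x: float, delta: float = 0.01) -> int:
    """Recover the bit by checking which lattice the value lies closest to."""
    d0 = abs(x - qim_embed(x, 0, delta))
    d1 = abs(x - qim_embed(x, 1, delta))
    return 0 if d0 <= d1 else 1

# Example: embed a bit into a vertex coordinate and read it back
marked = qim_embed(0.12345, 1)
assert qim_extract(marked) == 1
```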

To resist similarity transformation attacks, Chou et al. [19] embedded watermarks in a subset of the model's faces so that any change ruins the relationship between the marked faces and their neighboring vertices. Huang et al. [20] translated the 3D model into the spherical coordinate system and then used the QIM technique to embed the watermark in the r coordinate for authentication and verification. Xu and Cai [21] used principal component analysis (PCA) to generate a parameterized spherical-coordinate mapping square matrix in which to embed a binary image (the watermark). Wang et al. [1] used the Hamming code to calculate parity bits that are embedded in each vertex coordinate by LSB substitution to achieve verification during the extraction stage. To address the high collision rate of the hash functions used to generate the watermark from the mesh model, Wang et al. [2] employed a chaotic sequence generator to generate the embedded watermark, achieving both authentication and verification of the model.

Substitutive fragile watermarking techniques

Watermarking techniques can be classified into different categories depending on many attributes. Among these, techniques can be classified according to the watermark generation pattern, which depends on the application type: the watermark may be external information specific to the model (which must be kept secure) or information unrelated to the model. Generally, there are two watermark generation patterns:

  1. Self-embedding: the watermark embedded in the cover model is a compressed version of the model itself (e.g., a hash of the cover model or an error-correction code), inserted by some embedding strategy.
  2. External embedding: the watermark is external information, related or unrelated to the cover model. This external information could be text data, image data, or a pseudo-random bit sequence, and it must be transformed into a binary bit sequence before embedding (see the sketch below).
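As a concrete example of the conversion step in external embedding, here is a minimal Python sketch (the helper names and the sample string are ours):

```python
import numpy as np

def text_to_bits(message: str) -> np.ndarray:
    """Turn an external text watermark into a flat binary bit sequence."""
    data = np.frombuffer(message.encode("utf-8"), dtype=np.uint8)
    return np.unpackbits(data)

def bits_to_text(bits: np.ndarray) -> str:
    """Invert the conversion after the watermark has been extracted."""
    return np.packbits(bits).tobytes().decode("utf-8")

# Example round trip for a hypothetical ownership string
bits = text_to_bits("owner: Alice")
assert bits_to_text(bits) == "owner: Alice"
```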

According to this classification, Wang et al. [1, 2] proposed two fragile watermarking techniques based on a substitutive embedding method, namely least significant bit (LSB) substitution. In the first technique [1], an adaptive watermark is generated from each cover model using the Hamming code for 3D object verification. The Hamming code is used to generate three parity bits from each vertex; these three parity-check bits P1, P2, and P3 are regarded as the watermark and are embedded in each vertex coordinate by LSB substitution. This increases the data-hiding capacity, but on the other hand the embedding distortion of the model is uncontrollable. The authors claimed the method to be immune to the causality, convergence, and embedding-hole problems.
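The following Python sketch illustrates the idea under our own assumptions (the fixed-point precision and the choice of which coordinate bits feed the parity computation are ours; the paper's exact bit positions may differ): Hamming(7,4) parity bits are computed from a few leading fraction bits of each normalized coordinate and written into its three LSBs.

```python
import numpy as np

PRECISION = 16   # fixed-point fraction bits; an assumed value, not from [1]

def hamming_parity(d1: int, d2: int, d3: int, d4: int) -> tuple:
    """Hamming(7,4) parity-check bits P1, P2, P3 for data bits d1..d4."""
    return d1 ^ d2 ^ d4, d1 ^ d3 ^ d4, d2 ^ d3 ^ d4

def embed_vertex(coords: np.ndarray) -> np.ndarray:
    """Embed each coordinate's parity bits into its own three LSBs.

    coords: (3,) vertex coordinates normalized to [0, 1).
    """
    out = coords.copy()
    for i, c in enumerate(coords):
        q = int(c * (1 << PRECISION))                      # fixed-point value
        data = [(q >> (PRECISION - 1 - k)) & 1 for k in range(4)]  # 4 MSBs
        p1, p2, p3 = hamming_parity(*data)
        q = (q & ~0b111) | (p1 << 2) | (p2 << 1) | p3      # overwrite 3 LSBs
        out[i] = q / (1 << PRECISION)
    return out

def verify_vertex(coords: np.ndarray) -> bool:
    """A vertex passes verification if re-embedding leaves it unchanged."""
    return bool(np.array_equal(embed_vertex(coords), coords))
```

Because the parity bits live in the LSBs while the data bits are taken from the leading fraction bits, re-embedding an untampered vertex reproduces it exactly, which is what the verification step exploits.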

The second technique [2] proposes a chaos-sequence-based fragile watermarking scheme for 3D models in the spatial domain. The authors use a chaotic sequence generated from the Chen-Lee system as the embedded watermark, and embed it in each vertex coordinate according to a random sequence of integers generated with a secret key K, achieving both authentication and verification. Instead of relying on a hash function, tampered regions can be verified and located by checking the chaos-sequence-based watermark.
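A minimal sketch of the two ingredients, with a logistic map standing in for the Chen-Lee system used in [2] (the map, its parameters, and the key-driven permutation below are our illustrative assumptions):

```python
import numpy as np

def chaotic_watermark(n_bits: int, x0: float = 0.7, r: float = 3.99) -> np.ndarray:
    """Derive a binary watermark from a chaotic orbit by thresholding at 0.5.

    A logistic map is used here purely as a stand-in; [2] derives its
    sequence from the Chen-Lee system instead.
    """
    bits = np.empty(n_bits, dtype=np.uint8)
    x = x0
    for i in range(n_bits):
        x = r * x * (1.0 - x)            # logistic map iteration
        bits[i] = 1 if x >= 0.5 else 0
    return bits

def embedding_order(n_vertices: int, key: int) -> np.ndarray:
    """Key-driven pseudo-random order of vertex indices (the secret key K)."""
    rng = np.random.default_rng(key)
    return rng.permutation(n_vertices)

# Example: a 16-bit watermark and a visiting order for an 8-vertex mesh
watermark = chaotic_watermark(16)
order = embedding_order(8, key=1234)
```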

Both techniques are simple to implement and require neither the original model nor the watermark for 3D model verification and tamper localization, since they do not depend on a hash function for authentication and verification. They also achieve high embedding capacity, since all the vertices of the model are used for embedding. For the second technique, from a security point of view, recovering the chaos sequence is a challenge for an attacker; security is further strengthened by using secret keys to embed the watermarks.

Experimental results and discussion

The two techniques of Wang et al. [1, 2] were implemented in MATLAB R2018a, a multi-paradigm numerical computing environment developed by MathWorks.

Assessment Methods:

The main requirements for an effective watermark are imperceptibility, robustness against intended or unintended attacks, and capacity. Based on these requirements, a series of experiments was conducted to measure imperceptibility and robustness. Table 1 lists the assessment measures used to evaluate watermarking systems [22].

Table 1. Performance assessment measures used in mesh watermarking

| Assessment type | Assessment measure | Formula |
| --- | --- | --- |
| Imperceptibility | Hausdorff distance (HD) | $\mathrm{HD}(A,B)=\max\{h(A,B),\,h(B,A)\}$, where $h(A,B)=\max_{a\in A}\min_{b\in B}\lVert a-b\rVert$ |
| Imperceptibility | Modified Hausdorff distance (MHD) | $\mathrm{MHD}(A,B)=\min\{h(A,B),\,h(B,A)\}$ |
| Imperceptibility | Root mean square error (RMSE) | $\mathrm{RMSE}=\sqrt{\tfrac{1}{N}\sum_{i=1}^{N}\lVert v_i-v'_i\rVert^2}$ |
| Robustness | Correlation coefficient (CC) | $\mathrm{CC}(X,Y)=\dfrac{\sum_i(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_i(x_i-\bar{x})^2\sum_i(y_i-\bar{y})^2}}$ |

The RMSE measures the differences between the values predicted by a model or estimator and the observed values; lower values indicate a better fit, and small RMS values here indicate insignificant positional changes during watermark embedding. The Hausdorff distance measures how similar two sets are in the metric sense: if two sets have a small Hausdorff distance, they are supposed to look almost the same. The modified Hausdorff distance computes the forward and reverse distances and outputs the minimum of the two. The correlation coefficient (CC) measures the strength of the relationship between two variables; its values range from -1.0 to 1.0, where -1.0 indicates a perfectly negative correlation, 1.0 a perfectly positive correlation, and 0 no relationship between the two variables. Generally, the correlation coefficient is used to measure the change in the bit values between the original and extracted watermarks, i.e., how robust the watermark is to attacks. Since a fragile watermark aims to be sensitive to any attack and to detect any tampering with the model, we measured the CC between the original watermark W and the extracted watermark W', and between the original model M and the watermarked model M'.
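For reference, the four measures can be computed directly from the vertex arrays; a minimal Python sketch (our own implementation of the definitions above, using the text's min-based MHD variant):

```python
import numpy as np

def rmse(M: np.ndarray, Mp: np.ndarray) -> float:
    """RMSE between corresponding vertices of two (N, 3) meshes."""
    return float(np.sqrt(np.mean(np.sum((M - Mp) ** 2, axis=1))))

def directed_hausdorff(A: np.ndarray, B: np.ndarray) -> float:
    """Forward distance h(A, B): max over a in A of the distance to the
    nearest b in B. Materializes all pairwise distances, so it is suited
    to small meshes; chunk the computation for large ones.
    """
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return float(d.min(axis=1).max())

def hausdorff(A: np.ndarray, B: np.ndarray) -> float:
    """Symmetric Hausdorff distance: the larger of the directed distances."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

def modified_hausdorff(A: np.ndarray, B: np.ndarray) -> float:
    """MHD as defined above: the smaller of forward and reverse distances."""
    return min(directed_hausdorff(A, B), directed_hausdorff(B, A))

def correlation(x: np.ndarray, y: np.ndarray) -> float:
    """Pearson correlation coefficient between two flattened signals."""
    return float(np.corrcoef(x.ravel(), y.ravel())[0, 1])
```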

We applied these measures to both techniques [1, 2] using seven models. In the Hamming-code-based technique, the authors normalized the 3D model into the range 0 to 1 before embedding the watermark, but they did not denormalize after embedding. In our experiment, the algorithm was run both as described in the paper [1] and with a denormalization step added after embedding. The results without the denormalization step are presented in Table 2. Table 3 shows the metrics after denormalizing the 3D model; the RMS values are clearly lower than in the first case, indicating minimal positional changes during watermark embedding. Fig. 1 shows the model before and after embedding the watermark without denormalization, while Fig. 2 shows it with denormalization. Moreover, Fig. 3 and Fig. 4 show the differences between the X, Y, and Z vertex values of the original and watermarked models without and with denormalization, respectively.
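A minimal sketch of the normalization and the inverse (denormalization) step we added; per-axis min-max scaling is our assumption about how the normalization in [1] is performed:

```python
import numpy as np

def normalize(V: np.ndarray):
    """Scale vertex coordinates into [0, 1] per axis, keeping the parameters."""
    vmin, vmax = V.min(axis=0), V.max(axis=0)
    return (V - vmin) / (vmax - vmin), (vmin, vmax)

def denormalize(Vn: np.ndarray, params) -> np.ndarray:
    """Invert the scaling to restore the model's original coordinate range."""
    vmin, vmax = params
    return Vn * (vmax - vmin) + vmin
```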

Table 4 shows the measurement metrics for the chaos-sequence-based fragile technique [2]. Its imperceptibility measures are lower than those of the previous technique, and the technique does not distort the model after watermark embedding. Fig. 5 shows the model before and after watermarking and the differences between the vertices in the XYZ coordinate system.

Table 2. Hamming code based fragile technique measurements without denormalization

| Model | No. vertices/faces | HD | MHD | RMSE | CC (M, M') | CC (W, W') |
| --- | --- | --- | --- | --- | --- | --- |
| Cow | 2904/5804 | 0.9043 | 0.4848 | 0.4965 | 0.8995 | 1.0000 |
| Casting | 5096/10224 | 1.1005 | 0.4039 | 0.4912 | 0.9008 | 1.0000 |
| Bunny | 1355/2641 | 1.2419 | 0.6803 | 0.5303 | 0.9765 | 1.0000 |
| Bunny_bent | 1355/2641 | 1.4291 | 0.6760 | 0.5450 | 0.9549 | 1.0000 |
| hemi_bumpy | 1441/2816 | 1.5671 | 0.7331 | 0.5592 | 0.7941 | 1.0000 |
| Bunny | 34835/69666 | 0.9822 | 0.5176 | 0.4866 | 0.9544 | 1.0000 |
| hand | 36619/72958 | 1.0941 | 0.4169 | 0.4811 | 0.8988 | 1.0000 |

(HD, MHD, and RMSE are imperceptibility measures; CC (M, M') and CC (W, W') are robustness measures.)

Table 3. Hamming code based fragile technique measurements after denormalization

| Model | No. vertices/faces | HD | MHD | RMSE | CC (M, M') | CC (W, W') |
| --- | --- | --- | --- | --- | --- | --- |
| Cow | 2904/5804 | 1.5685e-15 | 5.0240e-16 | 3.4358e-16 | 1.0000 | -0.0050 |
| Casting | 5096/10224 | 1.8388e-15 | 5.3842e-16 | 3.6531e-16 | 1.0000 | 0.0056 |
| Bunny | 1355/2641 | 1.9375e-15 | 7.2551e-16 | 4.7291e-16 | 1.0000 | -0.0253 |
| Bunny_bent | 1355/2641 | 2.4139e-15 | 8.4952e-16 | 5.5116e-16 | 1.0000 | -0.0057 |
| hemi_bumpy | 1441/2816 | 3.1563e-15 | 9.8454e-16 | 6.5371e-16 | 1.0000 | 0.0048 |
| Bunny | 34835/69666 | 1.7844e-15 | 5.2505e-16 | 3.4298e-16 | 1.0000 | 0.0011 |
| hand | 36619/72958 | 1.8113e-15 | 5.1365e-16 | 3.5427e-16 | 1.0000 | 0.0117 |

(HD, MHD, and RMSE are imperceptibility measures; CC (M, M') and CC (W, W') are robustness measures.)

Fig. 1. (a) Original Cow model; (b) stego Cow model after applying the Hamming code technique [1] without denormalization.

Fig. 2. (a) Original Cow model; (b) stego Cow model after applying the Hamming code technique [1] followed by denormalization.

Fig. 3. Change in the x, y, and z coordinates after applying the Hamming code technique [1] without denormalization.

Fig. 4. Change in the x, y, and z coordinates after applying the Hamming code technique [1] followed by denormalization.

Table 4. Chaos sequence based fragile technique measurements

| Model | No. vertices/faces | HD | MHD | RMSE | CC (M, M') | CC (W, W') |
| --- | --- | --- | --- | --- | --- | --- |
| Cow | 2904/5804 | 8.9034e-16 | 2.8523e-16 | 1.9411e-16 | 1 | 1 |
| Casting | 5096/10224 | 1.0270e-15 | 2.7498e-16 | 1.8997e-16 | 1 | 1 |
| Bunny | 1355/2641 | 3.9374e-16 | 1.5555e-15 | 2.6232e-16 | 1 | 1 |
| Bunny_bent | 1355/2641 | 4.0792e-16 | 1.6542e-15 | 2.8374e-16 | 1 | 1 |
| hemi_bumpy | 1441/2816 | 1.6514e-15 | 5.3170e-16 | 3.5182e-16 | 1 | 1 |
| Bunny | 34835/69666 | 9.0876e-16 | 2.3628e-16 | 1.6133e-16 | 1 | 1 |
| hand | 36619/72958 | 8.7595e-16 | 2.4214e-16 | 1.6964e-16 | 1 | 1 |

(HD, MHD, and RMSE are imperceptibility measures; CC (M, M') and CC (W, W') are robustness measures.)

Fig. 5. (a) Original Cow model; (b) stego Cow model after applying the chaos-based technique [2]; (c), (d), and (e) the change in the x, y, and z coordinates.

Analyzing these techniques, we found that they achieve high embedding capacity because they use all of the vertices for embedding, which also leads to high distortion. To avoid this distortion, we suggest selecting the best vertices for embedding using a computational intelligence (CI) technique such as a neural network.

Conclusion

In this paper, we presented a comparative analysis of two substitutive fragile watermarking algorithms, clarifying their points of strength and weakness. The main requirements for designing an effective watermark are imperceptibility, robustness against intended or unintended attacks, and capacity. We used the RMSE, HD, and MHD to measure imperceptibility; the correlation coefficient is usually used to measure the robustness of a watermark, but here we used it to measure the sensitivity of the watermark, as shown in the experimental results.

References

  1. J. T. Wang, Y. C. Chang, S. S. Yu, and C. Y. Yu. Hamming code based watermarking scheme for 3D model verification. 2014 International Symposium on Computer, Consumer and Control, Taichung, pp. 1095-1098, 2014.
  2. J. T. Wang, W. H. Yang, P. C. Wang, and Y. T. Chang. A novel chaos sequence based 3D fragile watermarking scheme. 2014 International Symposium on Computer, Consumer and Control, Taichung, pp. 745-748, 2014.
  3. B. L. Yeo and M. M. Yeung. Watermarking 3D objects for verification. IEEE Computer Graphics and Applications, 19(1), 36-45, 1999.
  4. H. S. Lin, H. M. Liao, C. S. Lu, and J. C. Lin. Fragile watermarking for authenticating 3-D polygonal meshes. IEEE Transactions on Multimedia, 7(6), 997-1006, 2005.
  5. N. Werghi, N. Medimegh, and S. Gazzah. Data embedding of 3D triangular mesh models using ordered ring facets. 2013 10th International Multi-Conference on Systems, Signals & Devices (SSD), IEEE, pp. 1-6, 2013.
  6. H. T. Wu and Y. M. Cheung. A fragile watermarking scheme for 3D meshes. Proceedings of the 7th Workshop on Multimedia and Security, ACM, pp. 117-124, 2005.
  7. W. B. Wang, G. Q. Zheng, J. H. Yong, and H. J. Gu. A numerically stable fragile watermarking scheme for authenticating 3D models. Computer-Aided Design, 40(5), 634-645, 2008.
  8. Y. P. Wang and S. M. Hu. A new watermarking method for 3D models based on integral invariants. IEEE Transactions on Visualization and Computer Graphics, 15(2), 285-294, 2009.
  9. J. T. Wang, C. M. Fan, C. C. Huang, and C. C. Li. Error detecting code based fragile watermarking scheme for 3D models. 2014 International Symposium on Computer, Consumer and Control (IS3C), IEEE, pp. 1099-1102, 2014.
  10. S. Ichikawa, H. Chiyama, and K. Akabane. Redundancy in 3D polygon models and its application to digital signature. Journal of WSCG, 10(1), 225-232, 2002.
  11. H. T. Wu and Y. M. Cheung. Public authentication of 3D mesh models. 2006 IEEE/WIC/ACM International Conference on Web Intelligence (WI'06), Hong Kong, pp. 940-948, 2006.
  12. J. Bennour and J. L. Dugelay. Protection of 3D object visual representations. 2006 IEEE International Conference on Multimedia and Expo, Toronto, pp. 1113-1116, 2006.
  13. M. M. Sales, P. Rondao Alface, and B. Macq. 3D objects watermarking and tracking of their visual representations. The Third International Conferences on Advances in Multimedia, 2011.
  14. R. Ohbuchi, H. Masuda, and M. Aono. Watermarking three-dimensional polygonal models through geometric and topological modifications. IEEE Journal on Selected Areas in Communications, 16(4), 551-560, 1998.
  15. X. Mao, M. Shiba, and A. Imamiya. Watermarking 3D geometric models through triangle subdivision. Security and Watermarking of Multimedia Contents III, 4314, 253-260, 2001.
  16. B. L. Yeo and M. M. Yeung. Watermarking 3D objects for verification. IEEE Computer Graphics and Applications, 19(1), 36-45, 1999.
  17. C. M. Chou and D. C. Tseng. A public fragile watermarking scheme for 3D model authentication. Computer-Aided Design, 38(11), 1154-1165, 2006.
  18. F. Cayre and B. Macq. Data hiding on 3-D triangle meshes. IEEE Transactions on Signal Processing, 51(4), 939-949, 2003.
  19. C. M. Chou and D. C. Tseng. Affine-transformation-invariant public fragile watermarking for 3D model authentication. IEEE Computer Graphics and Applications, 29(2), 72-79, 2009.
  20. C. C. Huang, Y. W. Yang, et al. Spherical coordinate based fragile watermarking scheme for 3D models. International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems, Springer, Berlin, Heidelberg, pp. 566-571, 2013.
  21. T. Xu and Z. Cai. A novel semi-fragile watermarking algorithm for 3D mesh models. 2012 International Conference on Control Engineering and Communication Technology, Liaoning, pp. 782-785, 2012.
  22. S. Borah and B. Borah. Watermarking techniques for three dimensional (3D) mesh authentication in spatial domain. 3D Research, 9, 2018. doi:10.1007/s13319-018-0194-7.