What Is an Algorithm and How Does It Work


Introduction

Algorithms are effective, definite methods used in data processing, calculation, and automated reasoning. An algorithm starts with input fed into the system, executes a finite sequence of instructions, and produces an output. Algorithms make it possible to present information extracted from a large database in a form that many people can easily understand. Learning such rules entails either building a decision tree and converting it into a rule set, which is then simplified, or repeatedly finding the most powerful rule and removing the data it covers before searching for the next one. Every algorithm has a definite number of successive stages, from the initial input stage to the termination stage that yields the output.
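
To make this input-process-output structure concrete, consider a minimal, hypothetical Python example (not drawn from the cited papers): an algorithm that finds the largest value in a list through a definite sequence of stages.

def maximum(values):
    # A simple algorithm: a definite number of stages from input to output.
    if not values:                     # input stage: validate what is fed in
        raise ValueError("input must contain at least one value")
    largest = values[0]
    for v in values[1:]:               # a finite sequence of instructions
        if v > largest:
            largest = v
    return largest                     # termination stage: the output

print(maximum([3, 17, 5, 9]))          # -> 17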

Bayesian network

A Bayesian network (B) is a network structure in the form of a directed acyclic graph over a set of variables (U), and it represents a probability distribution over those variables in a given data set. Like other rules and instructions, this network structure carries some assumptions; for instance, it assumes that all variables in the data set are discrete, and if they are not, they are discretized before the rules are applied. Another assumption is that the data set has no missing values, and if there are any, the missing values are filled in (Bouckaert 4). The first step in applying the network is therefore confirming that the data set meets all the assumptions; if it does not, automatic filtering becomes necessary, and a warning signal appears.
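
The following Python sketch illustrates the two preprocessing assumptions in simplified form; the equal-width binning and most-frequent-value fill are illustrative choices for this sketch and do not reproduce Weka's actual filters.

def discretize(values, bins=3):
    # Equal-width binning turns a continuous variable into a discrete one.
    present = [v for v in values if v is not None]
    lo, hi = min(present), max(present)
    width = (hi - lo) / bins or 1.0          # avoid zero width on constant data
    return [None if v is None else min(int((v - lo) / width), bins - 1)
            for v in values]

def fill_missing(values):
    # Replace missing entries with the most frequent observed value.
    present = [v for v in values if v is not None]
    mode = max(set(present), key=present.count)
    return [mode if v is None else v for v in values]

column = [1.2, None, 3.8, 2.5, 9.1]
print(fill_missing(discretize(column)))      # -> [0, 0, 0, 0, 2]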

To classify with a Bayesian network, one simply calculates argmax_y P(y|x) using the distribution P(U) defined by the network, where the conditional probability is obtained as P(y|x) = P(U)/P(x). Applying this rule successfully involves first learning the network structure and then the probability tables, which calls for different approaches such as local score metrics, conditional independence tests, global score metrics, and fixed-structure strategies (Bouckaert 5). Each approach uses a different search algorithm, for instance hill climbing, simulated annealing, or tabu search, to find a good network structure. This opens the way for estimating a conditional probability table for each variable.
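
As a rough sketch of this calculation, the Python fragment below assumes the simplest possible network structure, in which the class node is the sole parent of every attribute (i.e. naive Bayes), and uses made-up probability tables in place of learned ones. It picks the class y that maximizes P(y|x); since the divisor P(x) is the same for every y, it can be dropped from the comparison.

P_class = {"yes": 0.6, "no": 0.4}                      # P(y), assumed learned
P_attr = {                                             # P(x_i | y), assumed learned
    "outlook": {("sunny", "yes"): 0.2, ("sunny", "no"): 0.6,
                ("rainy", "yes"): 0.8, ("rainy", "no"): 0.4},
    "windy":   {(True, "yes"): 0.3, (True, "no"): 0.7,
                (False, "yes"): 0.7, (False, "no"): 0.3},
}

def classify(x):
    # argmax over y of P(y) * product of P(x_i | y); the divisor P(x)
    # is constant across classes, so it does not change the argmax.
    scores = {}
    for y, prior in P_class.items():
        score = prior
        for attribute, value in x.items():
            score *= P_attr[attribute][(value, y)]
        scores[y] = score
    return max(scores, key=scores.get)

print(classify({"outlook": "sunny", "windy": False}))  # -> "yes" (0.084 vs 0.072)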

Fast effective rule induction

Incremental reduced error pruning (IREP) combines reduced error pruning with separate-and-conquer rule learning and aims to reduce unacceptably large errors on large, noisy data sets. Building an IREP rule requires randomly partitioning the examples into two subsets, a growing set and a pruning set (Cohen 3). The grow step starts with an empty conjunction of conditions and keeps adding conditions of the form "nominal attribute equals a legal value". Each added condition is chosen to maximize the FOIL information gain, and conditions are added until the rule covers no negative examples in the growing set.
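
The FOIL information gain guiding the grow step can be sketched as follows; this is one common formulation of Quinlan's measure, with the function name and example numbers chosen purely for illustration.

import math

def foil_gain(p0, n0, p1, n1):
    # p0, n0: positive/negative examples covered before adding the condition;
    # p1, n1: positive/negative examples covered after adding it.
    if p1 == 0:
        return float("-inf")            # the condition eliminates all positives
    before = math.log2(p0 / (p0 + n0))
    after = math.log2(p1 / (p1 + n1))
    return p1 * (after - before)        # weighted by positives still covered

# Grow step: pick the candidate condition with the highest gain.
print(foil_gain(50, 50, 30, 5))         # tighter, purer rule -> about 23.3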

After a rule is grown, it is immediately pruned, which involves deleting conditions so as to maximize the function

v = (p + (N - n)) / (P + N),

where P and N are the total numbers of positive and negative examples in the pruning set, and p and n are the numbers of positive and negative examples the rule covers. Deletion continues until no further deletion improves the value of v.
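
A small Python sketch of this pruning phase follows. The rule representation (a list of attribute-value tests) is a simplification chosen for this sketch, and only the final condition is considered for deletion at each step, whereas Cohen's IREP considers deleting any final sequence of conditions.

def prune_value(p, n, P, N):
    # Cohen's pruning metric v = (p + (N - n)) / (P + N).
    return (p + (N - n)) / (P + N)

def coverage(rule, pruning_set):
    # Count (p, n, P, N) for instances given as (attributes, is_positive) pairs.
    P = sum(1 for _, pos in pruning_set if pos)
    N = len(pruning_set) - P
    matched = [pos for x, pos in pruning_set
               if all(x.get(a) == v for a, v in rule)]
    p = sum(matched)
    n = len(matched) - p
    return p, n, P, N

def prune(rule, pruning_set):
    # Drop final conditions while doing so does not lower v.
    best = prune_value(*coverage(rule, pruning_set))
    while len(rule) > 1:
        candidate = rule[:-1]
        v = prune_value(*coverage(candidate, pruning_set))
        if v < best:
            break                        # further deletion hurts; stop
        rule, best = candidate, v
    return rule

pruning_set = [({"a": 1, "b": 2}, True), ({"a": 1, "b": 3}, True),
               ({"a": 2, "b": 2}, False)]
print(prune([("a", 1), ("b", 2)], pruning_set))   # drops ("b", 2); v improves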

PART algorithm

The PART algorithm integrates the C4.5 and RIPPER algorithms while avoiding the shortcomings of both. Its main advantage is that it does not require global optimization to produce accurate rule sets. It adopts the separate-and-conquer strategy: it builds a rule, removes the instances the rule covers, and continues recursively on the remaining instances until none are left (Frank and Witten 4). In essence, each iteration builds a pruned decision tree, and the leaf with the largest coverage becomes a single rule. Once the rule is formed, the tree is discarded; because the tree is never grown in full, it is regarded as a partial decision tree.
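
The separate-and-conquer loop can be sketched in Python as follows. This is a runnable toy version: instead of growing a real pruned partial C4.5 tree, each iteration picks the single attribute-value test that covers the most remaining instances with one majority label, standing in for "the leaf with the largest coverage"; it illustrates the loop structure, not C4.5's actual splitting.

def best_rule(data):
    # Score every (attribute, value, majority label) candidate by coverage.
    candidates = {}
    for x, label in data:
        for attr, value in x.items():
            labels = [l2 for x2, l2 in data if x2.get(attr) == value]
            majority = max(set(labels), key=labels.count)
            candidates[(attr, value, majority)] = labels.count(majority)
    return max(candidates, key=candidates.get)

def separate_and_conquer(data):
    rules = []
    while data:                                   # repeat until no instances remain
        attr, value, label = best_rule(data)      # stand-in for the largest leaf
        rules.append((attr, value, label))        # keep the rule; discard the "tree"
        data = [(x, l) for x, l in data if x.get(attr) != value]  # remove covered
    return rules

dataset = [({"outlook": "sunny"}, "no"), ({"outlook": "rainy"}, "yes"),
           ({"outlook": "sunny"}, "no"), ({"outlook": "overcast"}, "yes")]
print(separate_and_conquer(dataset))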

Conclusion

Computational procedures follow definite steps to elicit accurate results from noisy and complex data sets alike. These rules, for instance the PART algorithm, incremental reduced error pruning, and the Bayesian network, follow a definite sequence from input to the final output stage, in which the information generated from a complex database appears in a form that many can easily understand.

Works Cited

Bouckaert, Remco. Bayesian Network Classifiers in Weka. Hamilton, New Zealand: University of Waikato, 1999.

Cohen, William. Fast Effective Rule Induction. Chambery, France, 1993.

Frank, Eibe, and Ian Witten. Generating Accurate Rule Sets Without Global Optimization. Hamilton, New Zealand, 1997.
