Introduction
Fairness in ML
Algorithmic decision making is becoming part of everyday life. Every predictive model decides on the basis of the data it is trained on, and training models on huge amounts of data has pushed accuracy very high, in some cases beyond human performance. But because decision making relies so heavily on data, if the data is unfair to some group, then the model's output will also be unfair to that group. There is therefore a need for techniques that can limit this unfairness.
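A simple way to see data-driven unfairness is to compare the rate of favourable predictions across groups. The sketch below is illustrative: the predictions, group labels, and the parity ratio are toy values, not taken from any specific system.

```python
# Sketch: measuring group unfairness in model outputs via the
# positive-prediction rate per group (data and names are illustrative).

def positive_rate(preds, groups, group):
    """Fraction of positive (favourable) predictions among members of `group`."""
    members = [p for p, g in zip(preds, groups) if g == group]
    return sum(members) / len(members)

# Toy predictions (1 = favourable outcome) and a binary sensitive feature.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

rate_a = positive_rate(preds, groups, "a")   # 3/4 = 0.75
rate_b = positive_rate(preds, groups, "b")   # 1/4 = 0.25
disparity = rate_b / rate_a                  # well below parity (1.0)
```

A ratio far from 1.0 indicates that the model, having learned from skewed data, favours one group over the other.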
Anomaly detection
An anomaly, in the context of machine learning, is an observation that deviates from normal observations. Anomaly detection means finding patterns in data that do not conform to expected behaviour; such patterns may take the form of outliers, exceptions, and so on. Today, anomaly detection is used in a variety of applications such as cybersecurity, credit card fraud detection, and fault detection in safety-critical systems.
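One of the simplest formalisations of "deviates from normal observations" is a z-score rule: flag any point that lies more than a chosen number of standard deviations from the mean. The threshold below is illustrative (with so few points the outlier itself inflates the standard deviation, so a low threshold is used).

```python
# Sketch: flagging anomalies as points far from the mean, measured in
# standard deviations (a basic z-score rule; threshold is illustrative).
from statistics import mean, stdev

def zscore_anomalies(xs, threshold=2.0):
    m, s = mean(xs), stdev(xs)
    return [x for x in xs if abs(x - m) / s > threshold]

readings = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 42.0]  # one obvious outlier
print(zscore_anomalies(readings))
```

Real systems replace this rule with richer models (density estimates, isolation forests, and so on), but the underlying idea of separating normal from deviant behaviour is the same.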
Organization of The Report
This chapter provides background for the topics covered in this report. We gave a description of fairness in unsupervised learning and of anomaly detection, and then described the different fairness notions and the different types of anomaly. The rest of the report is organised as follows: in the next chapter we review prior works; in Chapter 3 we discuss our new algorithms for fairness in anomaly detection; and finally, in Chapter 4, we conclude with some directions for future work.
Review of Prior Works
Here we briefly discuss a few papers on fairness and anomaly detection.
Fairness Constraints: Mechanisms for Fair Classification
This work considers three definitions of fairness. Disparate treatment means treating someone unequally based on a sensitive feature when all other features are similar. Disparate impact means that outcomes are unfair to one group in comparison to another. Disparate mistreatment means that the classifier's accuracy differs across groups. Disparate treatment is direct unfairness, since it depends directly on the sensitive feature, while disparate impact and disparate mistreatment are indirect unfairness. The authors handle these three forms of unfairness using a constraint-based approach, trading off accuracy to achieve fairness.
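The two indirect notions above can be made concrete as gaps between per-group statistics. The following sketch computes them on toy labels and predictions; the data and variable names are illustrative, not from the paper.

```python
# Sketch: disparate impact as a gap in positive-prediction rates, and
# disparate mistreatment as a gap in error rates (toy data).

def rate(values):
    return sum(values) / len(values)

def group_select(xs, groups, g):
    return [x for x, grp in zip(xs, groups) if grp == g]

y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 1, 0, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Disparate impact: differing positive-prediction rates across groups.
di_gap = rate(group_select(y_pred, groups, "a")) - rate(group_select(y_pred, groups, "b"))

# Disparate mistreatment: differing misclassification rates across groups.
errs = [int(t != p) for t, p in zip(y_true, y_pred)]
dm_gap = rate(group_select(errs, groups, "a")) - rate(group_select(errs, groups, "b"))
```

The constraint-based approach in the paper bounds quantities like these during training, which is where the accuracy trade-off comes from.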
Classifying without Discriminating
This paper uses pre-processing of the data to reduce unfairness. The training data is first "massaged" so as to remove discrimination. However, this approach can only reduce disparate treatment, since future decisions are not known in advance. The method also decreases accuracy, since data is directly altered, but the authors change the data as little as possible so that accuracy is not affected much.
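A minimal sketch of the massaging idea: relabel the fewest training examples needed to equalise positive rates across groups, flipping the candidates a ranker scores closest to the decision boundary so the change stays minimal. The data, scores, and selection rule below are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch of "massaging" pre-processing: flip a deprived-group negative up
# and a favoured-group positive down per step, choosing boundary cases,
# until positive rates match (toy data; illustrative selection rule).

def massage(labels, groups, scores):
    labels = labels[:]
    pos = {g: sum(l for l, gr in zip(labels, groups) if gr == g) for g in ("a", "b")}
    n = {g: groups.count(g) for g in ("a", "b")}
    while pos["a"] / n["a"] > pos["b"] / n["b"]:
        # Promote: the negative in group b with the highest ranker score.
        i = max((k for k, (l, g) in enumerate(zip(labels, groups)) if g == "b" and l == 0),
                key=lambda k: scores[k])
        # Demote: the positive in group a with the lowest ranker score.
        j = min((k for k, (l, g) in enumerate(zip(labels, groups)) if g == "a" and l == 1),
                key=lambda k: scores[k])
        labels[i], labels[j] = 1, 0
        pos["b"] += 1
        pos["a"] -= 1
    return labels

labels = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
scores = [0.9, 0.8, 0.6, 0.4, 0.7, 0.55, 0.3, 0.2]
fixed = massage(labels, groups, scores)   # both groups end at rate 0.5
```

Because only boundary instances are flipped, the relabelled data stays close to the original, which is how accuracy loss is kept small.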
Certifying and Removing Disparate Impact
Selection processes can carry unintentional bias against certain groups. The basis of this bias can be a protected class such as religion, caste, or gender. Since such selection processes are not open and there is no way to inspect how selection occurs, the authors analyse the data itself. The paper gives a way to certify whether a data set exhibits disparate impact and to repair the data so it can be made unbiased, along with empirical evidence for their test on the given data.
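The certification side of this work builds on the legal "80% rule": a selection process is suspect if the protected group's selection rate falls below 80% of the favoured group's rate. The sketch below is a toy version of that check; the data and the 0.8 threshold parameter are illustrative.

```python
# Sketch of the "80% rule" disparate-impact check: flag the data set if
# the protected group's selection rate is below tau times the favoured
# group's rate (toy data; tau = 0.8 by convention).

def has_disparate_impact(selected, groups, protected, favoured, tau=0.8):
    def sel_rate(g):
        members = [s for s, grp in zip(selected, groups) if grp == g]
        return sum(members) / len(members)
    return sel_rate(protected) / sel_rate(favoured) < tau

selected = [1, 1, 1, 0, 1, 0, 0, 0]
groups   = ["f", "f", "f", "f", "p", "p", "p", "p"]
print(has_disparate_impact(selected, groups, "p", "f"))  # ratio 0.25/0.75
```

Repair then means perturbing the non-sensitive features so this check passes while preserving as much of the data's usefulness as possible.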
Algorithmic Decision Making and the Cost of Fairness
This work presents a case study of defendants awaiting trial. The authors found that, due to the data, black defendants are wrongly classified more often than white defendants. They investigated this with respect to disparate impact and disparate mistreatment, and defined thresholds under the rules that the decision should satisfy the fairness notions, that 30% of defendants should be detained, and that public safety should be maximised subject to the first two. Satisfying the fairness constraints reduces public safety, while maximising public safety violates the fairness constraints.
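The tension can be seen with risk thresholds on toy scores: a single shared threshold yields unequal detention rates across groups, while equalising the rates forces a lower threshold for one group, i.e. detaining defendants the model itself considers lower risk. All scores and thresholds below are illustrative, not from the study.

```python
# Sketch of the fairness/public-safety trade-off: shared vs per-group
# risk thresholds for detention decisions (toy risk scores).

def detain_rate(scores, threshold):
    return sum(s >= threshold for s in scores) / len(scores)

risk_a = [0.9, 0.7, 0.6, 0.3]   # group with higher estimated risk scores
risk_b = [0.8, 0.5, 0.4, 0.2]

shared = 0.55
print(detain_rate(risk_a, shared), detain_rate(risk_b, shared))  # unequal rates

# Equalising detention rates requires dropping group b's threshold below
# `shared`, detaining defendants the model scores as lower risk.
fair_b = 0.35
print(detain_rate(risk_b, fair_b))  # now matches group a's rate
```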
Anomaly Detection – A Survey
Anomaly detection aims to define a region of normal behaviour and to find observations that do not belong to this normal representation. Many challenges arise: defining the region of normal behaviour is hard, the boundary separating outliers is not definite, the definition of an outlier differs across application domains, the data may contain noise, the data available for training is limited, and anomalies are rare in most data sets. This survey covers the different aspects of anomaly detection and the techniques used for each aspect, specifying the pros and cons of each approach.
Conclusion
Most of the prior works mentioned here have one of the following drawbacks, with the exception of the fairness-constraint classifier: they are designed for a specific fairness notion, they work with only one sensitive feature, or they are limited to a few classification models. The fairness-constraint classifier can handle multiple sensitive features and provides fairness under each notion together, but at the cost of accuracy.