Fairness Audits: Demographic Parity Inspector¶
- class nobias.audits.ExplanationAudit(model, gmodel)¶
Bases: BaseEstimator, ClassifierMixin
Given a model, a dataset, and a protected attribute, this audit determines whether the model violates demographic parity and which features drive the violation. It does so by computing the SHAP values of the model's predictions and then training a classifier to predict the protected attribute from those SHAP values: if the classifier performs better than chance, the explanations carry information about the protected attribute.
Example¶
>>> import pandas as pd
>>> import numpy as np
>>> from sklearn.linear_model import LogisticRegression
>>> from xgboost import XGBRegressor
>>> from nobias.audits import ExplanationAudit
>>> N = 5_000
>>> x1 = np.random.normal(1, 1, size=N)
>>> x2 = np.random.normal(1, 1, size=N)
>>> x34 = np.random.multivariate_normal([1, 1], [[1, 0.5], [0.5, 1]], size=N)
>>> x3 = x34[:, 0]
>>> x4 = x34[:, 1]
>>> # Binarize the protected attribute
>>> x4 = np.where(x4 > np.mean(x4), 1, 0)
>>> X = pd.DataFrame([x1, x2, x3, x4]).T
>>> X.columns = ["var%d" % (i + 1) for i in range(X.shape[1])]
>>> y = (x1 + x2 + x3) / 3
>>> y = 1 / (1 + np.exp(-y))
>>> detector = ExplanationAudit(model=XGBRegressor(), gmodel=LogisticRegression())
>>> detector.fit(X, y, Z="var4")
>>> detector.get_auc_val()
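The audit idea above can be sketched without the library. The following is a minimal illustration, not the nobias implementation: it fits a model f on the non-protected features, computes SHAP-style per-feature contributions (for a linear model these have the closed form coef * (x - mean of the training x), which avoids the shap dependency), and fits an auditor g that predicts the protected attribute Z from those contributions. An auditor AUC well above 0.5 signals a demographic-parity concern. All variable names here are illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
N = 5_000
x1 = rng.normal(1, 1, N)
x2 = rng.normal(1, 1, N)
x34 = rng.multivariate_normal([1, 1], [[1, 0.5], [0.5, 1]], size=N)
x3, x4 = x34[:, 0], x34[:, 1]
Z = (x4 > x4.mean()).astype(int)      # binarized protected attribute
X = np.column_stack([x1, x2, x3])     # model features (Z excluded)
y = (x1 + x2 + x3) / 3

# Hold out data so the auditor g is fit and scored on points f never saw.
X_f, X_g, y_f, y_g, Z_f, Z_g = train_test_split(
    X, y, Z, train_size=0.6, random_state=0
)

f = LinearRegression().fit(X_f, y_f)

# Linear SHAP values: each feature's contribution to each prediction.
shap_g = f.coef_ * (X_g - X_f.mean(axis=0))

# Fit g on half of the held-out SHAP values, validate on the other half.
S_tr, S_te, Z_tr, Z_te = train_test_split(
    shap_g, Z_g, train_size=0.5, random_state=0
)
g = LogisticRegression().fit(S_tr, Z_tr)
auc = roc_auc_score(Z_te, g.predict_proba(S_te)[:, 1])
print(f"auditor AUC: {auc:.2f}")
```

Because x3 is correlated with the protected attribute and drives the model's predictions, the auditor's AUC lands well above 0.5 here; g's coefficients then indicate which features' contributions leak the protected attribute.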
- explanation_predict(X)¶
- explanation_predict_proba(X)¶
- fit(X, y, Z)¶
- fit_audit_detector(X, y)¶
- fit_model(X, y)¶
- get_auc_f_val()¶
TODO: handle the case where F is a classifier.
- get_auc_val()¶
Returns the AUC on the validation set.
- get_coefs()¶
- get_explanations(X)¶
- get_gmodel_type()¶
- get_linear_coefs()¶
- get_model_type()¶
- get_split_data(X, y, Z, n1=0.6, n2=0.5)¶
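The two split fractions plausibly chain: assuming n1 is the share of the data used to fit the model and n2 splits the remainder between fitting the auditor and validating it (an assumption about the internals, not confirmed by this page), the defaults n1=0.6, n2=0.5 yield a 60/20/20 split. A sketch of that arithmetic with sklearn:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(100).reshape(50, 2)   # 50 toy samples
y = np.arange(50)
n1, n2 = 0.6, 0.5                   # the documented defaults

# First split: n1 of the data for the model, the rest held out.
X_f, X_rest, y_f, y_rest = train_test_split(X, y, train_size=n1, random_state=0)
# Second split: n2 of the remainder for the auditor, the rest for validation.
X_g, X_val, y_g, y_val = train_test_split(X_rest, y_rest, train_size=n2, random_state=0)

print(len(X_f), len(X_g), len(X_val))  # 30 10 10
```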
- predict(X)¶
- predict_proba(X)¶