Fairness Evaluation and Model Explainability in AI

Artificial Intelligence (AI) relies on machine learning models to produce its outcomes. Imagine a credit card company that receives hundreds of thousands of applications each year and would like to use AI for the first round of application filtering. If the system is not developed properly, its decisions can be skewed by unfairness – for example, unfairly rejecting far more applications from a certain age group, a certain gender, or a certain employment history pattern. Unfairness is the key issue here: machine learning inference that does not reflect what the actual situation warrants. This is typically called bias in AI and machine learning.


Biases can enter at various stages of the machine learning lifecycle:


  • Biases may exist in the pre-training data – e.g., the dataset to be used in machine learning training already contains biases.
  • Biases may be introduced by the machine learning process itself – e.g., by algorithm or hyperparameter choices.


Below is a machine learning lifecycle chart from the AWS website:

[Machine learning lifecycle chart – from the AWS website]
Biases may exist in multiple phases, from ‘Dataset construction’ to ‘Algorithm Selection’ to ‘Monitoring / Feedback’. 

AWS has a powerful tool for combating bias in machine learning – Amazon SageMaker Clarify. It provides the following functionality:

  • Evaluation: evaluate fairness and identify biases.
  • Explainability: explain how input features contribute to the machine learning model predictions during model development and inference.
  • Compliance program support: detect biases and other risks as prescribed by guidelines such as ISO/IEC 42001, across all lifecycle phases – data preparation, model customisation, and deployed models.

The following chart shows the SageMaker Clarify processing flow:


[SageMaker Clarify processing flow chart – from the AWS website]


As mentioned above, this Clarify processing flow can be run at multiple phases in the machine learning lifecycle. 

Clarify can compute pre-training bias metrics that help data scientists understand the bias in the data so that it can be addressed and a fairer dataset used instead.

Clarify can generate post-training bias metrics that help to understand any bias introduced by algorithm or hyperparameter choices, so that the training setup can be further tuned.
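
To make this concrete, here is a minimal, self-contained sketch – plain pandas rather than Clarify itself, with hypothetical column names – of two representative pre-training metrics, Class Imbalance (CI) and Difference in Proportions of Labels (DPL), alongside the analogous post-training metric, Difference in Positive Proportions in Predicted Labels (DPPL):

import pandas as pd

# Hypothetical applications data: "Sex" is the facet (1 = advantaged
# group, 0 = disadvantaged group), "Target" is the true label and
# "Prediction" is the model output (1 = approved).
df = pd.DataFrame({
    "Sex":        [1, 1, 1, 1, 0, 0, 0, 1, 0, 1],
    "Target":     [1, 0, 1, 1, 0, 1, 0, 1, 0, 1],
    "Prediction": [1, 0, 1, 1, 0, 0, 0, 1, 0, 1],
})

adv, dis = df[df.Sex == 1], df[df.Sex == 0]

# Pre-training: Class Imbalance – is one facet under-represented?
ci = (len(adv) - len(dis)) / (len(adv) + len(dis))

# Pre-training: Difference in Proportions of Labels – do positive
# labels occur at different rates across the two facets?
dpl = adv.Target.mean() - dis.Target.mean()

# Post-training: the same difference, computed on predicted labels.
dppl = adv.Prediction.mean() - dis.Prediction.mean()

print(f"CI={ci:+.2f}  DPL={dpl:+.2f}  DPPL={dppl:+.2f}")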

Clarify can provide Partial Dependence Plots (PDPs) to help understand how the predicted target variable changes, on average, as the value of one feature is varied.
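
In code, a PDP analysis is requested through a PDPConfig object that is later passed to a Clarify explainability run – a minimal sketch, with a hypothetical feature name:

from sagemaker import clarify

# Request a partial dependence plot for the "Age" feature, evaluated
# over a grid of 20 values across its observed range.
pdp_config = clarify.PDPConfig(features=["Age"], grid_resolution=20)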

Clarify can provide feature attributions based on Shapley values. The Shapley value is a concept from cooperative game theory: it quantifies the contribution of each player to a game, and hence gives a principled way to distribute the total gain generated by the game among its players. Amazon SageMaker Clarify adopts this idea in the machine learning context, treating the model's prediction on a given instance as the game and the features included in the model as the players. This yields a way to determine the contribution each feature makes to the model's predictions, both for specific predictions and at a global level.
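
In Clarify, these Shapley-based attributions are requested through a SHAPConfig. The sketch below assumes the training_data, model_config and clarify_processor objects constructed in the walkthrough that follows; the baseline choice and output path are illustrative:

from sagemaker import clarify

explainability_output_path = "s3://{}/{}/clarify-explainability".format(bucket, prefix)

explainability_data_config = clarify.DataConfig(
    s3_data_input_path=train_uri,
    s3_output_path=explainability_output_path,
    label="Target",
    headers=training_data.columns.to_list(),
    dataset_type="text/csv",
)

# Kernel SHAP needs a baseline instance; the feature means are a
# common choice.
shap_config = clarify.SHAPConfig(
    baseline=[training_data.drop(columns=["Target"]).mean().tolist()],
    num_samples=100,
    agg_method="mean_abs",
)

# One explainability job can compute the SHAP attributions and the
# PDP requested above.
clarify_processor.run_explainability(
    data_config=explainability_data_config,
    model_config=model_config,
    explainability_config=[shap_config, pdp_config],
)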


For explanatory purposes, here is a high-level experimental run using SageMaker Clarify:

The data scientist uploads the dataset to an S3 bucket.

Then they train a standard XGBoost model. Below is a sample code block:


from sagemaker.estimator import Estimator
from sagemaker.image_uris import retrieve

# This references the AWS managed XGBoost container
xgboost_image_uri = retrieve("xgboost", region, version="1.5-1")

xgb = Estimator(
    xgboost_image_uri,
    role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    disable_profiler=True,
    sagemaker_session=sagemaker_session,
)

# Standard hyperparameters for a binary classification objective
xgb.set_hyperparameters(
    max_depth=5,
    eta=0.2,
    gamma=4,
    min_child_weight=6,
    subsample=0.8,
    objective="binary:logistic",
    num_round=800,
)

xgb.fit({"train": train_input}, logs=False)


And a SageMaker model is created. Again, a sample code block looks like this:

from datetime import datetime

# Register the trained model with SageMaker under a timestamped name
model_name = "DEMO-clarify-model-{}".format(datetime.now().strftime("%d-%m-%Y-%H-%M-%S"))
model = xgb.create_model(name=model_name)
container_def = model.prepare_container_def()
sagemaker_session.create_model(model_name, role, container_def)


With this setup in place, Clarify can be used to detect possible pre-training and post-training biases using a variety of metrics.

The following flow applies, built from four configuration objects:

A DataConfig object tells Clarify where the input dataset lives, where to write the report, and which column holds the label:

bias_report_output_path = "s3://{}/{}/clarify-bias".format(bucket, prefix)
bias_data_config = clarify.DataConfig(
    s3_data_input_path=train_uri,
    s3_output_path=bias_report_output_path,
    label="Target",
    headers=training_data.columns.to_list(),
    dataset_type="text/csv",
)


A ModelConfig object identifies the trained model and the compute resources Clarify uses to obtain predictions from it:

model_config = clarify.ModelConfig(
    model_name=model_name,
    instance_type="ml.m5.xlarge",
    instance_count=1,
    accept_type="text/csv",
    content_type="text/csv",
)


A ModelPredictedLabelConfig tells Clarify how to turn the model's probability output into a predicted label – here, scores above 0.8 count as positive:

predictions_config = clarify.ModelPredictedLabelConfig(probability_threshold=0.8)


A BiasConfig specifies which label value counts as the positive outcome, the facet (sensitive attribute) to analyse, and a group variable used for conditional metrics:

bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1], facet_name="Sex", facet_values_or_threshold=[0], group_name="Age"
)
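
With these four objects in place, a SageMakerClarifyProcessor ties the flow together and launches the analysis job – a minimal sketch, reusing the role and session from the training step:

from sagemaker import clarify

clarify_processor = clarify.SageMakerClarifyProcessor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=sagemaker_session,
)

# Compute pre-training and post-training bias metrics in one job; the
# report is written to bias_report_output_path.
clarify_processor.run_bias(
    data_config=bias_data_config,
    bias_config=bias_config,
    model_config=model_config,
    model_predicted_label_config=predictions_config,
    pre_training_methods="all",
    post_training_methods="all",
)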


As discussed above, bias can be present in the data before any model training. Inspecting pre-training data for bias helps to detect data collection gaps, guide feature engineering, and understand what societal biases the data may reflect. Generating pre-training bias metrics in the way described above does not require a trained model.
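
In fact, the same processor can compute the pre-training metrics on their own, with no ModelConfig involved – a sketch:

# Pre-training analysis needs only the data and the bias configuration.
clarify_processor.run_pre_training_bias(
    data_config=bias_data_config,
    data_bias_config=bias_config,
    methods="all",
)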

On the other hand, computing post-training bias metrics does require a trained model: even unbiased training data may still produce biased predictions after training, depending on factors such as algorithm and hyperparameter choices.
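
The corresponding processor call therefore takes the model configuration as well – a sketch:

# Post-training analysis queries the trained model for predictions.
clarify_processor.run_post_training_bias(
    data_config=bias_data_config,
    data_bias_config=bias_config,
    model_config=model_config,
    model_predicted_label_config=predictions_config,
    methods="all",
)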

Clarify then delivers a bias report, which is written to the configured S3 output path and can also be viewed in SageMaker Studio:

[Sample Clarify bias report]
As the discussion and the experimental run above show, Amazon SageMaker Clarify can help to improve machine learning models by detecting potential bias and by explaining how these models make predictions. The fairness and explainability functionality provided by Clarify helps to raise the trustworthiness of Artificial Intelligence.
  
                                        -- Simon Wang


