Author: Nele Albers
Date: January 2025
In this file, we reproduce the weights participants assigned to the different allocation principles in our post-questionnaire. Specifically, we compute:
- the weights for the five allocation principle groups,
- the weights for each individual allocation principle (i.e., the weights from Supplementary Table 4), and
- the weights for the principle groups relative to the prognosis group.

Required files:
- Data/postquestionnaire_anonym.csv

Output files:
- Data/ethical_principle_weights
- Data/ethical_principle_relative_weights_with_prognosis
Authored by Nele Albers, Francisco S. Melo, Mark A. Neerincx, Olya Kudina, and Willem-Paul Brinkman.
Let's import the packages we need.
import pandas as pd
import pickle
And we load the anonymized data from the post-questionnaire.
df_post = pd.read_csv("Data/postquestionnaire_anonym.csv")
Let's print how many people filled in their preferences for the allocation principles.
num_ratings = df_post["ethical_preferences_1"].count()
print("Number of people with preferences for allocation principles:", num_ratings)
Number of people with preferences for allocation principles: 449
print("\n***Ethical rule preferences (grouped)***")
rule_names_df = ["ethical_preferences_1",   # autonomy
                 "ethical_preferences_2",   # random
                 "ethical_preferences_3",   # least amount so far
                 "ethical_preferences_4",   # longest time since last feedback
                 "ethical_preferences_5",   # youngest first
                 "ethical_preferences_6",   # largest increase in chance of successfully preparing for quitting
                 "ethical_preferences_11",  # largest reduction in negative consequences
                 "ethical_preferences_7",   # least likely to successfully prepare without feedback
                 "ethical_preferences_12",  # most likely to experience negative consequences without feedback
                 "ethical_preferences_9",   # largest value to society in future
                 "ethical_preferences_10"]  # past usefulness or sacrifice
rule_groups_survey = {"autonomy": ["ethical_preferences_1"],
                      "equal": ["ethical_preferences_2",
                                "ethical_preferences_3",
                                "ethical_preferences_4"],
                      "priority": ["ethical_preferences_5",
                                   "ethical_preferences_9",
                                   "ethical_preferences_10"],
                      "prognosis": ["ethical_preferences_6",
                                    "ethical_preferences_11"],
                      "sickest-first": ["ethical_preferences_7",
                                        "ethical_preferences_12"]}
weights = {}
for rule_group in rule_groups_survey:
    # Sum the mean weights of all principles in this group.
    total = 0
    for rule in rule_groups_survey[rule_group]:
        total += df_post[rule].mean()
    print(rule_group, ":", round(total, 2), "%")
    weights[rule_group] = total/100

with open("Data/ethical_principle_weights", 'wb') as f:
    pickle.dump(weights, f)
***Ethical rule preferences (grouped)***
autonomy : 8.62 %
equal : 22.18 %
priority : 13.04 %
prognosis : 30.82 %
sickest-first : 25.34 %
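As a quick sanity check on the output above: since the five groups partition all eleven allocation principles, the grouped weights should sum to 100%. A minimal standalone check using the printed values:

```python
# Grouped weights as printed above (percentages).
grouped = {"autonomy": 8.62,
           "equal": 22.18,
           "priority": 13.04,
           "prognosis": 30.82,
           "sickest-first": 25.34}

total = sum(grouped.values())
print(round(total, 2))  # the five groups together account for the full 100%
assert abs(total - 100) < 0.05
```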
And we also print the weights for each single allocation principle, thus reproducing the weights from Supplementary Table 4.
rule_names_understandable = ["Autonomy",
                             "Random",
                             "Least amount so far",
                             "Longest time since last feedback",
                             "Youngest first",
                             "Largest increase in chance of successfully preparing for quitting",
                             "Largest reduction in negative consequences",
                             "Least likely to successfully prepare without feedback",
                             "Most likely to experience negative consequences without feedback",
                             "Largest value to society in future",
                             "Past usefulness or sacrifice"]
for rule_idx, rule in enumerate(rule_names_df):
    weight = df_post[rule].mean()
    print(rule_names_understandable[rule_idx], ":", round(weight, 2), "%")
Autonomy : 8.62 %
Random : 9.69 %
Least amount so far : 6.04 %
Longest time since last feedback : 6.45 %
Youngest first : 5.31 %
Largest increase in chance of successfully preparing for quitting : 16.42 %
Largest reduction in negative consequences : 14.4 %
Least likely to successfully prepare without feedback : 13.51 %
Most likely to experience negative consequences without feedback : 11.82 %
Largest value to society in future : 3.83 %
Past usefulness or sacrifice : 3.9 %
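The individual weights above should also be consistent with the grouped weights computed earlier: summing the printed per-principle weights within each group should reproduce the grouped values, up to small discrepancies because each printed value is rounded to two decimals. A standalone consistency check using the printed values:

```python
# Rounded per-principle weights as printed above, keyed by group
# (same grouping as rule_groups_survey earlier in this notebook).
per_principle = {"autonomy": [8.62],
                 "equal": [9.69, 6.04, 6.45],
                 "priority": [5.31, 3.83, 3.9],
                 "prognosis": [16.42, 14.4],
                 "sickest-first": [13.51, 11.82]}
grouped_printed = {"autonomy": 8.62, "equal": 22.18, "priority": 13.04,
                   "prognosis": 30.82, "sickest-first": 25.34}

for group, values in per_principle.items():
    # Allow 0.02 slack because the printed values were rounded before summing.
    assert abs(sum(values) - grouped_printed[group]) < 0.02, group
print("Grouped weights match the sums of the individual weights.")
```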
And we also compute the weights of the principle groups relative to the prognosis group. We need these relative weights for our analysis for RQ3.
rule_groups_without_prognosis = ["autonomy", "equal", "priority", "sickest-first"]
relative_weights = {}
for rule_group_name in rule_groups_without_prognosis:
    relative_weight = (weights[rule_group_name]/weights["prognosis"]) / (1 + weights[rule_group_name]/weights["prognosis"])
    print("Relative weight for", rule_group_name, ":", round(relative_weight, 2))
    relative_weights[rule_group_name] = relative_weight

with open("Data/ethical_principle_relative_weights_with_prognosis", 'wb') as f:
    pickle.dump(relative_weights, f)
Relative weight for autonomy : 0.22
Relative weight for equal : 0.42
Relative weight for priority : 0.3
Relative weight for sickest-first : 0.45
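Note that the relative-weight expression simplifies algebraically: with w_g the group weight and w_p the prognosis weight, (w_g/w_p) / (1 + w_g/w_p) = w_g / (w_g + w_p), i.e., each relative weight is the group's share of the combined weight of that group and prognosis. A standalone check, using the grouped weights printed earlier, that both forms agree and reproduce the output above:

```python
# Grouped weights as printed earlier in this notebook (as fractions).
weights = {"autonomy": 0.0862, "equal": 0.2218, "priority": 0.1304,
           "prognosis": 0.3082, "sickest-first": 0.2534}

for group in ["autonomy", "equal", "priority", "sickest-first"]:
    ratio = weights[group] / weights["prognosis"]
    original_form = ratio / (1 + ratio)
    simplified = weights[group] / (weights[group] + weights["prognosis"])
    # Both forms are algebraically identical.
    assert abs(original_form - simplified) < 1e-12
    print("Relative weight for", group, ":", round(simplified, 2))
```

This reproduces the relative weights 0.22, 0.42, 0.3, and 0.45 printed above.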