Global Data Ethics Pledge (GDEP)


Working Groups for v2.0 of the Pledge are being formed right now and you can read more about them and join on the Working Groups page.

The 5 Principles:

  • I COMMIT TO FAIRNESS
  • I COMMIT TO OPENNESS
  • I COMMIT TO RELIABILITY
  • I COMMIT TO TRUST
  • I COMMIT TO SOCIAL BENEFIT

Click here to join the movement today. Become a data defender for life.

Fairness:
I will make a dedicated effort to understand, mitigate and communicate the presence of bias in both data practice and consumption.

Openness:
I will practice humility and openness. Transparent practices, community engagement, and responsible communications are an integral part of my data ethics practice.

Reliability:
I will ensure that every effort is made to glean a complete understanding of what is contained within data, where it came from, and how it was created. I will also extend this effort for future users of all data and derivative data.

Trust:
I wish to build public confidence in data practitioners. I will make every effort to use data and algorithms in ways that maximize the informed participation of people around the world.

Social Benefit:
I set people before data and intend to be responsible for maximizing social benefit and minimizing harm. I will consider the impact of my work on communities of people, other living beings, ecosystems and the world-at-large.

Commit to the Pledge and help us build a movement that collectively works towards a more accountable, equitable, open, inclusive, and transparent data community. The more of us who commit to The Pledge and adopt these principles, the more leverage we have to create change in the collective work we produce and how we produce it. We can affect hearts and minds and drive policy to a better place for all.

Data for Democracy’s Global Data Ethics Pledge is a global initiative aimed at creating the industry’s most widely adopted framework and community for ethics in data science.

Data for Democracy partnered with Bloomberg and BrightHive to develop a **code of ethics for data scientists, software developers, and data analyzers of all types (that means you 😄)**. This code aims to define values and priorities for overall ethical behavior, in order to guide anyone handling data to be a thoughtful, responsible agent of positive change. The code of ethics is being developed through a community-driven approach.

By hosting discussions among our data-focused community, we hope to better capture the diverse interests, needs, and concerns that are at play in the community, and put together a code that is truly created by the data community, for the data community.

Read more here.

Ethics Principles


  • Bias will exist. It's my responsibility to understand, mitigate and communicate the presence of bias in algorithms.
  • It's my responsibility to increase social benefit while minimizing harm.
  • I will practice humility and openness.
  • I will know my data and help future users know it as well.
  • I will make reasonable efforts to know and document the data's origins and transformations.
  • I will document my work transparently, accessibly, responsibly, and reproducibly, and communicate it clearly.
  • I will strive to engage "the whole community" using, or impacted by, my work. I will try to clarify or redefine this community on an ongoing basis, and I will use skepticism and an open mind to expand this definition.
  • I will put people before data.
  • I will not over- or under-represent findings.
  • Data scientists should use a _question-driven approach_ rather than a _data-driven_ or _methods-driven_ approach. Data scientists should consider personal safety and treat others the way they want to be treated.
  • Open by default - use of data should be transparent and fair.
  • I am part of an ecosystem, and it is my ethical obligation to produce considerate analysis conducive to standards of minimal impact and risk to communities, people, and the world.
  • I will respect others' data even more than my own. I will seek to understand the sources of the data and the consequences of my actions.
  • I will protect individual and institutional privacy.
  • Diversity for inclusivity.
  • Attention to bias.
  • Respect for others/persons.
  • I will respect human dignity.
  • I will agitate for my organizations and colleagues to act ethically and respect human dignity.

What has been done so far?


We conducted a preliminary scan of the Data for Democracy community by posting discussion questions on Slack and Twitter and collecting feedback and input from our 2,000-plus members. We then identified recurring themes that our community members highlighted as important and arranged these in a systematic framework, accompanied by a list of resources addressing the topic.

After that, seven working groups formed by volunteers are holding in-depth discussions of each topic area. Each group meets once every two weeks, for one to two hours at a time, to carry out these analyses. Finally, a selected group of advisors will review the notes and give feedback.

The aim was to have a draft version ready by February 6th, to be presented at the Data for Good Exchange (D4GX) held in San Francisco.

The topic areas that the volunteers are discussing are:

  • Data Ownership and Provenance
  • Bias and Mitigation
  • Responsible Communications
  • Privacy and Security
  • Transparency and Openness
  • Questions and Answers
  • Thought Diversity

How can you contribute ideas?


In order for this process to be as transparent and open as possible, we are making use of GitHub to collect ideas and suggestions from the community as a whole. This provides a quick and easy way for you to do the following:

  1. Submit titles or links of literature/resources that you have found useful in thinking about ethics and data science. You can do so by forking the repo, making the change in your text editor, and submitting a pull request, or by editing the resources.md file directly.
If you are not familiar with GitHub: .md stands for Markdown, a way to format writing on the web easily (cheatsheet link below). If you click on the link, it'll take you to a page that loads only the resources.md file. On the right side, above the beginning of the document, there is a button with a pencil icon. This allows you to edit the file and add your links without having to fork the repo to your computer, use the command line, or use anything other than the content management system built into GitHub.
  2. Submit links including comments on what matters to you when creating a data science code of ethics. Open a pull request from your forked repo.
If you click the “Commit changes” button when using GitHub's content management system, you’ll get a notification from GitHub saying that you can’t directly contribute to the code, so it has forked the repository for you and made your changes on your branch. That might sound a bit confusing, but it only means that we aren’t allowed to directly change the existing/live code or document without going through the process that verifies any proposed changes. To make this happen, everything is copied to your GitHub account and you make the changes there. If you want the changes to appear in the main project page/repo, you need to submit a pull request by following the steps provided (see the command-line sketch after this list).
  3. Browse all the suggestions, comments, and links submitted by your fellow community members in the Discussions section. All of these contributions can be viewed by anyone, including you, here and in the master version of the resources.md file.

  4. Indicate which suggestions and comments you agree with by voting with 👍 or 👎 emojis in the Discussions section located here.
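
For readers who prefer the command line over GitHub's built-in editor, the fork-and-pull-request flow described above looks roughly like the sketch below. The repository URL, fork, and branch names are placeholders rather than the project's actual addresses; substitute your own.

```bash
# Clone your fork of the repository (created with the "Fork" button on GitHub).
# <your-username> and <repo-name> are placeholders for your account and this repo.
git clone https://github.com/<your-username>/<repo-name>.git
cd <repo-name>

# Create a branch, edit resources.md in your text editor, then stage and commit.
git checkout -b add-ethics-resource
git add resources.md
git commit -m "Add a resource on data ethics"

# Push the branch to your fork, then open a pull request against the main repo
# from GitHub's web interface.
git push origin add-ethics-resource
```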

GitHub presents you with all suggestions. However, please keep in mind that none of these suggestions are mutually exclusive; we are not pitting ideas against each other or using the number of votes to eliminate suggestions. We are simply using this as one convenient metric to determine which ideas have the most resonance in the community, or which resources/literature have been useful to a large number of people. If all suggestions presented are important to you, feel free to 👍 all of them. If you have a more detailed response or would like to express your thoughts on someone else's idea, you can submit this comment in the Ethics Team's Discussion area or comment directly on their open pull request.

This document is written and edited using GitHub flavored Markdown. It's not scary; it's very simple, and they even provide a cheatsheet, which you can find here.
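
If you have never written Markdown before, the snippet below is a small, non-exhaustive sample of the constructs you are most likely to need when adding an entry to resources.md; the title and link target are placeholders, not real resources.

```markdown
## Suggested reading

- [Example resource title](https://example.com/data-ethics-article) (one sentence on why it is useful)
- Use *italics* or **bold** for emphasis, and `backticks` for inline code
```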

Shout out to Ashley Blewer's blog post for breaking down the pull request so succinctly.