World Ethical Data Foundation publishes “Me-We-It: An Open Standard for Responsible AI”
The World Ethical Data Foundation has published an open standard for responsible AI designed to “clarify the process of building AI by exposing the steps that go into creating it responsibly”.
The Open Suggestions Framework is a free online forum where users, developers, technologists, data scientists and others can share knowledge and build on each other’s suggestions and approaches, clarifying the processes and information involved in building, testing and developing AI so as to ensure accountability without restricting innovation.
It is hoped that the Framework will serve as a set of considerations used by everyone involved in AI development, evolving and updating through suggestions and amendments and setting an actionable standard for ethical development of AI.
Here are some key points to note:
- Me-We-It is defined as:
Me – The questions each individual who is working on the AI should ask themselves before they start and as they work through the process.
We – The questions the group should ask themselves — in particular, to define the diversity required to reduce as much human bias as possible.
It – The questions we should ask individuals and the group as they relate to the model being created and the impact it can have on our world.
- The process is split into three key steps, and the Me-We-It questions to be asked at each step are:
Me – What are the questions asked of you, and how do the answers evolve?
We – What are the questions asked of the group, and how do the answers evolve?
It – What are the questions asked of the algorithm, and how do the answers evolve?
- Training – Data Selection and Ingestion
- Building – Creation or Selection of Algorithms
- Testing – Managing Test Data and Tagging
The new open suggestions framework is available for immediate use, and it is hoped it will encourage developers, users and other interested parties to engage with and develop the framework, providing a consistent standard for responsible AI and promoting discussion and engagement throughout the life cycle of the process.
It is designed to ensure accountability in the development of AI, providing a transparent source code repository that is open to access and contribution.
Many companies currently have their own standards for AI; it is hoped that this framework will bring a standardised, evolving way of working with AI, with the openness and transparency to allow suggestions and contributions and so achieve better AI solutions. The framework is new, and the level of engagement it attracts remains to be seen.
The content of this page is a summary of the law in force at the date of publication and is not exhaustive, nor does it contain definitive advice. Specialist legal advice should be sought in relation to any queries that may arise.