Joi Ito's Web
There seems to be some sort of general rule that technologies and systems - conversations on the Internet, US democracy (and its capture by powerful financial interests), the Arab Spring movement and many other things - that were wonderfully optimistic and positive at the beginning seem to regress and fail as they scale or age. Most of these systems evolve to resist redesign and overthrow, adapting like some sophisticated virus or cancer. It's related to, but harder to fix than, the tragedy of the commons.
I want to write a longer post trying to understand this trend/effect, but I was curious about whether there was already some work on understanding this effect and whether there was already a name for the idea. If not, what should we call it, assuming people agree that it's a "thing"?
Iyad Rahwan was the first person I heard use the term society-in-the-loop machine learning. He was describing his work which was just published in Science, on polling the public through an online test to find out how they felt about various decisions people would want a self-driving car to make - a modern version of what philosophers call "The Trolley Problem." The idea was that by understanding the priorities and values of the public, we could train machines to behave in ways that the society would consider ethical. We might also make a system to allow people to interact with the Artificial Intelligence (AI) and test the ethics by asking questions or watching it behave.
Society-in-the-loop is a scaled-up version of human-in-the-loop machine learning - something that Karthik Dinakar at the Media Lab has been working on and that is emerging as an important part of AI research.
Typically, machines are "trained" by AI engineers using huge amounts of data. The engineers tweak what data is used, how it's weighted, the type of learning algorithm used and a variety of parameters to try to create a model that is accurate and efficient, makes the right decisions and provides accurate insights. One of the problems is that because AI, or more specifically machine learning, is still very difficult to do, the people who train the machines are usually not domain experts. The training is done by machine learning experts, and the completed model is then often tested by those same experts. A significant problem is that any biases or errors in the data will produce models that reflect those biases and errors. An example of this would be data from regions that allow stop and frisk - communities that are more heavily targeted will obviously appear to have more crime.
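To make the stop-and-frisk point concrete, here is a minimal sketch (the regions, rates and toy risk score are all invented for illustration): two regions have identical underlying crime rates, but one is patrolled twice as heavily, so a model built on recorded incidents rates it as roughly twice as risky.

```python
import random

random.seed(0)

# Hypothetical illustration: regions "A" and "B" have the SAME underlying
# crime rate, but region A is searched twice as often, so roughly twice as
# many incidents get *recorded* there.
TRUE_CRIME_RATE = 0.05            # identical in both regions
PATROLS = {"A": 2000, "B": 1000}  # region A is patrolled twice as heavily

recorded = {
    region: sum(random.random() < TRUE_CRIME_RATE for _ in range(n))
    for region, n in PATROLS.items()
}

# A naive "model": predicted risk = recorded incidents / population,
# using the same nominal population for both regions.
POPULATION = 10_000
predicted_risk = {r: count / POPULATION for r, count in recorded.items()}

# Region A looks about twice as risky, purely because it was searched more.
print(predicted_risk)
```

The bias here is entirely in the data-collection process, not in the learning step, which is exactly why an engineer tuning the algorithm alone would never see it.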
Human-in-the-loop machine learning is work that is trying to create systems that either allow domain experts to do the training or at least involve them in it, by creating machines that learn through interactions with experts. At the heart of human-in-the-loop computation is the idea of building models not just from data, but also from the human perspective on the data. Karthik calls this process 'lensing': extracting the human perspective, or lens, of a domain expert and fitting it to algorithms that learn from both the data and the extracted lens, all during training time. We believe this has implications for making tools for probabilistic programming and for the democratization of machine learning.
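One common shape such a system can take is uncertainty sampling: the model asks the expert only about examples it is unsure of, so the expert's lens is folded in during training. The sketch below is purely illustrative (the keyword model, the threshold and the stand-in expert are my assumptions, not Karthik's actual system):

```python
# A minimal human-in-the-loop sketch using uncertainty sampling.

def predict(model, text):
    """Score a text in [0, 1]; 0.5 means the model is maximally unsure."""
    words = text.split()
    return sum(model.get(w, 0.5) for w in words) / len(words)

def train_with_expert(examples, ask_expert, threshold=0.2):
    """Train a toy keyword model, consulting the expert only when unsure."""
    model = {}
    for text in examples:
        score = predict(model, text)
        if abs(score - 0.5) < threshold:   # model is unsure about this example
            label = ask_expert(text)       # the expert supplies the "lens"
            for w in text.split():
                model[w] = 1.0 if label else 0.0
    return model

# Stand-in domain expert: flags anything containing "threat" as positive.
expert = lambda text: "threat" in text
model = train_with_expert(["a threat here", "nothing here"], expert)
print(predict(model, "threat here"))
```

The important property is that the expert never has to label the whole dataset; the machine routes only its uncertain cases to the human, which is what makes the approach practical for busy domain experts.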
At a recent meeting with philosophers, clergy and AI and technology experts, we discussed the possibility of machines taking over the job of judges. We have evidence that machines can make very accurate assessments of things that involve data, and it's quite reasonable to assume that decisions judges make, such as setting bail or granting parole, could be made much more accurately by machines than by humans. In addition, there is research showing that expert humans are not very good at setting bail or granting parole appropriately. Whether you get a hearing from the parole board before or after their lunch has a significant effect on the outcome, for instance. (There have been some critiques of the study cited in this article, and the authors of the paper have responded to them.)
In the discussion, some of us proposed the idea of replacing judges with machines for certain kinds of decisions, bail and parole as examples. The philosophers and several of the clergy explained that while it might feel right from a utilitarian perspective, it was important for society that the judges be human - even more important than getting the "correct" answer. Putting aside the argument about whether we should be optimizing for utility or not, having the buy-in of the public would be important for the acceptance of any machine learning system, and it would be essential to address this perspective.
There are two ways we could address this concern. One would be to put a "human in the loop" and use machines to assist or extend the capacity of the human judges. It is possible that this would work. On the other hand, experience in several other fields, such as medicine or flying airplanes, has shown that humans may overrule machines with the wrong decision often enough that it would make sense to prevent humans from overruling machines in some cases. It's also possible that a human would become complacent, or conditioned to trust the results, and just let the machine run the system.
The second way would be for the machine to be trained by the public - society in the loop - in a way that people felt the machine reliably and fairly represented their, most likely diverse, set of values. This isn't unprecedented - in many ways, the ideal government would be one where the people felt sufficiently informed and engaged that they would allow the government to exercise power, believe that it represented them, and accept that they were also ultimately responsible for its actions. Maybe there is a way to design a machine that could garner the support and the proxy of the public by being trainable by the public and transparent enough that the public could trust it. Governments deal with competing and conflicting interests, as will machines. There are obvious and complex obstacles, including the fact that unlike traditional software, where the code is like a series of rules, a machine learning model is more like a brain - it's impossible to look at the bits and understand exactly what it does or would do. There would need to be a way for the public to test and audit the values and behavior of the machines.
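One hedged sketch of what such public auditing could look like in practice: since the model's internals are opaque, the audit treats it as a black box and runs it against scenarios the public has agreed on - for example, checking that identical facts in different regions yield the same decision. The `decide_bail` stand-in and the scenarios below are invented for illustration:

```python
# Auditing by scenario: test the *behavior* of an opaque model, not its bits.

def decide_bail(record):
    """Stand-in for an opaque trained model."""
    return "release" if record["prior_offenses"] == 0 else "bail"

# Publicly agreed scenarios, each with the decision society expects.
AUDIT_SCENARIOS = [
    ({"prior_offenses": 0, "region": "A"}, "release"),
    ({"prior_offenses": 0, "region": "B"}, "release"),  # same facts, different region
]

def audit(model, scenarios):
    """Return the scenarios where the model violates the agreed expectation."""
    return [(case, expected) for case, expected in scenarios
            if model(case) != expected]

print(audit(decide_bail, AUDIT_SCENARIOS))  # [] means every scenario passed
```

The design choice worth noting is that the audit suite, not the model, is the thing the public debates and maintains - much closer to legislation than to code review.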
If we were able to figure out how to take the input from, and then gain the buy-in of, the public as the ultimate creator and controller of this machine, it might solve the other side of this judicial problem - the case of a machine made by humans that commits a crime. If, for instance, the public felt that they had sufficient input into and control over the behavior of a self-driving car, could the public also feel that they, or the government representing them, were responsible for its behavior and any damage it caused? Could that help us get around the product liability problem that any company developing self-driving cars will face?
How machines will take input from, and be audited and controlled by, the public may be one of the most important areas that needs to be developed in order to deploy artificial intelligence in decision making that might save lives and advance justice. This will most likely require making the tools of machine learning available to everyone, having a very open and inclusive dialog and redistributing the power that will come from advances in artificial intelligence - not just figuring out ways to train it to appear ethical.
This year, the Shuttleworth Foundation asked me to be the honorary steward of the September 2016 fellowship intake. This meant that I would help review and recommend the people who would receive the Shuttleworth Fellowship which funds the fellow's salary as well as their project up to $250,000. It's one of the most interesting and successful fellowship programs that I know for funding unique, provocative and unconventional individuals and their ideas. I'm a huge fan.
We saw some great applications and I was really happy with the three fellows selected for the round that I worked on, Achal, Isha and Ugo. Through the process I got to know their work quite well and I was excited to get a chance to meet Isha when I was in New York last week.
Isha Datar works on cellular agriculture research, the science of growing animal products in cell cultures instead of farmed herds. It's a very new field with a lot of challenges, including questions about how to make non-animal-based nutrient systems, how to make it taste good, how to make it energy efficient, how to scale it, etc. At her non-profit organization New Harvest, Isha is working on the core research as well as funding and coordinating research across the world. What's exciting and important to me is that she's decided to do this in an open source and collaborative non-profit way, because she and her colleagues believe that the field is still very early and that it would be advanced most effectively through this non-profit structure.
Neha Narula wrote a post on Medium last Monday about the MIT Digital Currency Initiative (DCI) at the Media Lab and her new role as its Research Director. Also on Monday, TED posted her talk on the future of money, which I think is one of the clearest "what is Bitcoin" explanations I've seen. I saw her a few days later and did a Facebook Live conversation with her, which I've uploaded to YouTube, SoundCloud and iTunes.
Neha has been working as a member of the DCI for awhile now, but in this new role, she will drive the technical research agenda of the DCI and help coordinate research inside of MIT as well as in other academic institutions and in the broader community. She comes with a solid technical background with a PhD from MIT in distributed systems and previously as a software engineer at Google. Neha and the DCI have already been actively engaged in research, development and teaching in digital currencies, blockchain and related fields, but with Neha's leadership, I'm hoping that we can continue to ramp these efforts up as well as increase collaboration and engagement.
Neha led the creation of a website for the DCI where you can learn about some of the projects and people involved. Also, as I wrote in a Medium post on September 6, Brian Forde, the director of the DCI, will be transitioning out of that role.
Copyright xkcd CC BY-NC
Back when I first started blogging, the standard post took about 5 min and was usually written in a hurry after I thought of something to say in the shower. If it had mistakes, I'd add/edit/reblog any fixes.
As my posts have gotten longer and the institutions affected by my posts have gotten bigger, fussier and more necessary to protect, I've started becoming a bit more careful about what I say and how I say it.
Instead of blog first, think later - agile blogging - I now have a process that feels a bit more like blogging by committee. (Actually, it's not as bad as it sounds. You, the reader, are benefiting from better-thought-through blog posts because of this process.)
When I have an idea, I usually hammer out a quick draft, stick it in a Google Doc and then invite in anyone who might be able to help, including experts, my team working on the particular topic, and editors and communications people. It's a different bunch of people depending on the post, but almost everything I've posted recently is the result of a group effort.
Jeremy Rubin, a recent MIT grad who co-founded the Digital Currency Initiative at MIT mentioned that maybe I should be giving people credit for helping - not that he wouldn't help if he didn't get credit, but he thought that as a general rule, it would be a good idea. I agreed, but I wasn't sure exactly how to do it elegantly. (See what I did here?)
I'm going to start adding contributors at the bottom of blog posts as sort of a "credits" section, but if anyone has any good examples or thoughts on how to give people credit for helping edit and contributing ideas to a post or an informal paper like my posts on my blog and pubpub, I'd really like to see them.
Sultan is the most interesting person I know in the United Arab Emirates. I met him in 2010 or so, soon after I had moved to Dubai. He had just been asked to "take a break" from his job as a journalist at The National, the main national newspaper, for being controversial. I helped him get started on Twitter, and he taught me about the culture and politics of the region.
He is now a Director's Fellow at the Media Lab and a good friend and advisor.
I recently had the opportunity to catch up with him and get an update and some overviews about the region - Arab Spring, arts, politics, media, culture.
Martin Nowak runs the Program for Evolutionary Dynamics at Harvard. At a recent meeting at his lab, I heard him describe the history of life on earth in a fascinating way using evolutionary dynamics. At another meeting, over dinner, he and Danny Hillis disagreed on whether you could model the universe on a Turing machine - in other words, whether we can simulate or "run" our brains or the universe digitally.
Daiko Matsuyama is the Deputy Head Priest of the Taizoin Zen Buddhist Temple. Tenzin Priyadarshi is the president and CEO of the Dalai Lama Center for Ethics and Transformative Values at MIT and Director of the Ethics Initiative at the MIT Media Lab.
The three of us are all friends but had never met as a group, so we decided to try a three-way Skype call, streamed on Facebook Live, to talk about Daiko's new book, which he was asking me to blurb. Unfortunately, the book is only in Japanese so far.
Julia Reda is a Member of the European Parliament representing Germany, and she also serves as a Vice-President of the Greens/EFA group, president of the Young Pirates of Europe and a member of the Pirate Party of Germany.
She was the rapporteur of the Parliament's review of the 2001 Copyright Directive.
We set up a Skype call, and some of the EU's secret conversations about copyright leaked just as the call was starting, so we used this as an opportunity to talk about some of the crazy copyright laws being proposed and passed in Europe right now.
Bob is the most cited engineer in the history of the world. He is an MIT Institute Professor (there are usually only 12). He is also (lucky for me), a friend and a great mentor of mine since I met him in 2013 at my first Red Sox game with David Lucchino who introduced us and invited us to the game.
Bob is a great example and mentor for so many people. I recently got a chance to catch up with him and hear about his story and talk about things like peer review and the future of science. I streamed it using my Mevo to Facebook Live and then posted a cleaner video to YouTube and audio to SoundCloud and iTunes.