
RASE #4 DAO Democracy with Ralph Merkle

Fourth episode of the Reason and Science [Eventually] Podcast, June 22, 2016.

Featured in this episode:
- VidCon 2016 (my article on the 2015 experience http://www.federicopistono.org/blog/vidcon-motivation-inspiration-fascin...)
- Conversation with Ralph Merkle on DAO Democracy at the Decentralized Autonomous Society meetup in Palo Alto. We discussed Ethereum, Blockchain, Mars Space Travel, Prediction Markets, and the Future of Democracy http://www.meetup.com/Decentralized-Autonomous-Society-Meetup-Palo-Alto/...

Subscribe and download at: http://federicopistono.org/podcast
RSS feed https://feeds.feedburner.com/RASEpodcast
iTunes https://itunes.apple.com/podcast/reason-science-eventually/id684001161

On the Importance of Pattern Recognition

I've been thinking a lot lately. I observe myself staring into the void, or looking at people's faces, movements, behaviors. I listen to their words, and I have a strange and distant feeling of "outerness". But what am I thinking about exactly?

I think about thought.

In particular, I ask myself the reason we do anything. Really, why do we do anything? Why do we wake up, grab a cup of coffee, have children, work, watch films, take hikes – why do we do anything at all, as opposed to nothing? I've been so caught up in the everyday TODOs that sometimes I get the feeling I'm moving on autopilot, without really questioning why I'm doing what I'm doing.

I believe this to be one of the fundamental questions of existence.

The first answer that came to mind is evolutionary, and it's probably the most obvious one: certain instincts and physical and behavioral traits were selected for by the process of evolution, and we now exhibit them without necessarily having a reason, other than random chance, natural selection, and time.

But then I thought about it some more. I came to the conclusion that life is about patterns, and living beings value pattern recognition more than anything else.

Think about it. What makes a gazelle successful? It must spot lions and other threats effectively and efficiently, and react in a split second without wasting energy. Based on the limited information it has available at any moment, it must act accordingly. Spotting the lion requires sophisticated visual, auditory, and potentially olfactory systems, all of which are intensely focused on recognizing patterns and raising the alarm when a specific one is spotted. Activating the muscles and beginning the complex process of moving four coordinated limbs to propel the entire body forward while staying in balance is another case of pattern recognition and execution, coupled with a feedback loop of the body's response, which leads to another state, which requires more pattern recognition, and so forth. In algorithmic terms, it's a recursive function (albeit simplified).
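
To make the analogy concrete, here is a minimal, purely illustrative sketch in Python of that recursive sense-recognize-act loop. Everything in it – the toy patterns, the "world" represented as a list, the function names – is a hypothetical simplification, not a model of real animal cognition.

    import random

    # Toy illustration of the recursive loop described above: sense the
    # environment, recognize a pattern, act, and feed the new state back in.
    # The patterns and responses below are purely hypothetical placeholders.
    PATTERNS = {
        "rustling grass": "flee",         # possible lion: run, ask questions later
        "quiet savanna": "keep grazing",  # nothing alarming: conserve energy
    }

    def sense(world):
        """Return the limited observation available at this instant."""
        return random.choice(list(PATTERNS))

    def act(observation):
        """Map the recognized pattern to a response."""
        return PATTERNS[observation]

    def live(world, steps_left):
        """Recursive sense-recognize-act loop with the state fed back in."""
        if steps_left == 0:  # base case so the toy example terminates
            return world
        observation = sense(world)
        response = act(observation)
        new_world = world + [(observation, response)]  # acting changes the state
        return live(new_world, steps_left - 1)         # recurse on the new state

    for observation, response in live([], steps_left=5):
        print(f"saw '{observation}' -> {response}")

The real thing is of course massively parallel and continuous rather than a tidy recursion, but the structure – perception feeding action feeding new perception – is the point.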

What makes a person successful?

How to Create a Malevolent Artificial Intelligence

For those of you who have been following my work, it should come as no surprise that I have an ambivalent view of technology.

Technology is arguably the predominant reason we live safer, longer, and healthier lives than ever before, particularly when we include medical technologies – sanitation, antibiotics, vaccines – and communication technologies – satellites, the internet, and smartphones. It has immense potential, and it has been the driving force for innovation and development for centuries.

But it has a dark side. Technology, once a strong democratizing force, now drives more inequality. It allows governments and corporations to spy on citizens on a level that would make Orwell's worst nightmares look like child's play. It could lead to a collapse of the economic system as we know it, unless we find, discuss, and test new solutions.

To a certain extent, this is already happening, albeit not in a uniformly distributed fashion. If we consider a longer timeframe – perhaps a few decades – things could get far more worrisome. I think it's worth thinking and preparing sooner, rather than despairing once it's too late.

Many distinguished scientists, researchers, and entrepreneurs have expressed such concerns for almost a century. In January 2015, dozens of them, including Stephen Hawking and Elon Musk, signed an Open Letter calling for concrete research on how to prevent certain potential pitfalls, noting that "artificial intelligence has the potential to eradicate disease and poverty, but researchers must not create something which cannot be controlled".

And this is exactly what Roman Yampolskiy and I explored in a paper we recently published, titled Unethical Research: How to Create a Malevolent Artificial Intelligence.

Cybersecurity research involves investigating malicious exploits as well as designing tools to protect cyber-infrastructure. It is this information exchange between ethical hackers and security experts that results in a well-balanced cyber-ecosystem. In the blooming domain of AI Safety Engineering, hundreds of papers have been published on different proposals geared toward the creation of a safe machine, yet nothing, to our knowledge, has been published on how to design a malevolent machine.

It seemed rather odd to us that virtually all research so far had focused on preventing the accidental and unintended consequences of an AI going rogue – e.g. the paperclip scenario. While this is certainly a possibility, it's also worth considering that someone might deliberately want to create a Malevolent Artificial Intelligence (MAI). If that were the case, who would be most interested in developing it, how would it operate, and what would maximize its chances of survival and ability to strike?

Availability of such information would be of great value, particularly to computer scientists, mathematicians, and others with an interest in AI safety who are attempting to avoid the spontaneous emergence or deliberate creation of a dangerous AI – one that could negatively affect human activities and, in the worst case, cause the complete obliteration of the human species.

This includes the creation of an artificial entity that can outcompete or control humans in any domain, making humankind unnecessary, controllable, or even subject to extinction. Our paper provides some general guidelines for the creation of a malevolent artificial entity, and hints at ways to potentially prevent it, or at the very least to minimize the risk.

We focused on some theoretical yet realistic scenarios, touching on the need for an international oversight board, the risk posed by the existence of non-free software in AI research, and how the legal and economic structure of the United States provides the perfect breeding ground for the creation of a Malevolent Artificial Intelligence.

I am honored to share this paper with Roman, a friend and a distinguished scientist who has published over 130 academic papers and has contributed significantly to the field.

I hope our paper will inspire more researchers and policymakers to look into these issues.

You can read the full text at arxiv.org/abs/1605.02817: Unethical Research: How to Create a Malevolent Artificial Intelligence.


Understanding the Refugee Crisis in Syria and Europe

I am receiving tons of messages about my latest social media posts on the crisis in Syria, the responses of the various states (European and otherwise), the responsibilities, and the consequences.

I am creating a course to try to make sense of all this, collecting and selecting the best resources to include.

Watch the course: http://bit.ly/konoz-refugee-crisis

If you have any videos to suggest, feel free to add a comment.

Announcing a New Project: Eternally Curious

Good news, everyone! I've been meaning to do this for at least five years, and today I'm so happy to finally announce it to the world. It's a new video series of highly curated and well-produced content called Eternally Curious, where I explore all things I'm interested in.

I'm kicking off the season with this first video: "Why Are People Stupid?".


Video link: https://youtu.be/BzzaIezXS8w

I hope you'll enjoy it.

Remember to subscribe to my YouTube channel here: http://youtube.eternallycurio.us
And to support me on konoz! https://konoz.io/fede

On Trust

Trust. It's a strange feeling.

Being trusting of others is my default state. I assume people are generally OK, and that they act in selfish or deliberately evil ways only out of necessity or in extreme cases of boredom. I know that this includes millions of variations and possibilities, but still, I generally like to assume I can trust strangers.

All of this can change of course in a matter of seconds. I like this quote from Mike Tyson (paraphrased):

Everybody has a plan until they get punched in the face.

That punch in the face can come at any time, and it typically does when you least expect it.

I was on a plane from Los Angeles to London today. I sat down, got my things set up, and briefly went to the bathroom. I came back two minutes later, only to find that my iPad was gone. Disappeared. The iPad was no longer. It was an ex-iPad.

The moment you realize you have been fucked and that you have no control over things, a torrent of emotions comes rushing to your head. First you try to remember the details before the fact. Did you really have the iPad there? Yes, you put it in the pocket in front of your seat. Was it not inside the bag? Pretty sure it wasn't, but check the bag, just to be sure. Gosh, I shouldn't have gone to the bathroom while people were still arriving and sitting down. Did you back up the photos and videos you took? $700 down the toilet for taking a leak kind of burns, but the photos! Those are memories, money is replaceable. What about that blog post you wrote? Did you back that up? You should back up more often...

Then comes the suspicion. You are sure: it was there, and somebody took it. Who could have done that? Maybe it was just a bored teenager. Maybe it was an asshole who wanted a new shiny screen to watch bad blockbuster movies on and read the Daily Mail. Did you have a code on the lock screen? How difficult was it? Only 4 digits, stupid Apple security, a monkey could crack that in a few hours. Not that it matters, they'll probably format it before even trying to open it. Is Find My iPad activated? Was it in airplane mode? If so, it's of no use. Who could have done that? Look around you. Maybe this guy. Or that woman, she looks awfully suspicious. That's what you get for trusting people so much. Pretty stupid move. Don't they have security cameras or something? Always there to harass us for this false sense of security, maybe they can turn out to be useful for once.

While this is going on, you start to think that you might be getting paranoid for no reason. Maybe it was just displaced. Go to the flight attendant, and ask if someone has found it.

As I walk back to my seat, I hear the announcement going off on the intercom, and I get the gaze of the person sitting next to me. Excuse me, have you seen an iPad with a red cover? I left it here, maybe it fell under the seat or something? "No, haven't seen anything, sorry", he says.

As I go through the emotional roller coaster I try to step back and think this through rationally. There is no point in worrying or getting emotional. You're more likely to think straight if you don't get carried away. And if you don't find it, that's it. Move on. They're not going to search 400 passengers to retrieve your lost iPad, get over it.

"Is this yours?", says the guy next to me, as he hands me my lost treasure. Speechless, I hesitate. "Yes", I utter tentatively, "I never check the pocket in front of the seat", he adds.

"Thanks", I sigh in relief.

As I collect my thoughts on what just happened, I can quite literally feel my brain shifting state, and giggle at the double 180-degree change in worldview my mind has gone through in less than 10 minutes.

Then it hits me. I remember his face when I came back from the bathroom. He was staring at me. It was a mix of surprise and terror. You didn't register it immediately, but you noticed, then got distracted when you found out that your iPad was missing, and couldn't think straight anymore. You saw him taking a good look around the seats and bags, or at least pretending to, while the hostess was making the announcement. Then you remembered his words when you asked the second time, "When you sat down, did you notice if there was an iPad, or was it already gone?", "I didn't see anything, I wish I could help you, I didn't take it", "Of course, I was just trying to pinpoint at which point it disappeared", I conclude.

But there is something bugging me. How come he couldn't find it, after I asked him twice, and it was right in front of him? It was right there. How could he miss it? Maybe he thought someone had left it there and then he took it, hoping nobody would come to claim it. Then when he saw me he got scared, he tried to keep it hidden, then realized I was eventually going to find out, and looked for a way to return it while making it look like he didn't know it was there.

Sneaky bastard.

Wait a minute. Where is this coming from? This isn't you. You trust people. Maybe he was being honest. If it's true that he never checks the pocket where the airplane magazines are stored, then his story checks out. No mystery, no evil intent.

But then how did the iPad get into his seat's pocket? Are you sure you didn't displace it yourself, and you just don't remember?

At this point, I realize there is no point in going any further with this, it's an infinite spiral from which you can never get out.

Little by little, all these experiences shape us and make us who we are. The more we allow fear and suspicion to take a hold of us, the more we become alienated and we distance ourselves from others.

The challenge is to remain open, and to not let negativity take over. That's a lesson that we need to deal with every day.

VidCon: Motivation. Inspiration. Fascination.

How would you describe your experience in three words? It's a question that I find myself asking more frequently, both of myself and of other people, and at every iteration the interest and the expectation grow accordingly.

It forces you to think, reflect, and internalize emotions and situations that would otherwise pass you by, forever out of reach – evanescent, fleeting entities that disappear the moment you experience them.

And so I ask that question.

I savor the moment when I look into the eyes of my interlocutor, shining as they move to the upper left, a sign that they are accessing that part of the creative brain that creates new, spectacular pathways in the synaptic connectome of their mind. And I wait with a smile of satisfaction, knowing that they are creating new memories, and that this process of voluntary reflection will help them solidify what they have experienced, and thus appreciate it more deeply.

You can tell when they are making the effort, walking that extra step that is undoubtedly more difficult, but that pays off exponentially more than simply glazing over and answering on autopilot. Then comes the sudden epiphany: thoughts have been processed, memories formed, and the smile becomes contagious as they finally become aware of what they had been missing out on until the moment you changed their mindset and forced them to look at themselves from a different perspective. Words have been attached to these new structures, and the act of voicing them will reinforce them, like building a solid foundation from which cathedrals and castles are erected, in all their splendor and immensity.

Now comes my favorite part. Will they open the doors of their mental cathedral to you, thus sharing a common space and quite literally opening up, making themselves vulnerable? A challenging and scary thought, albeit no less rewarding than the previous one.

I thoroughly enjoy walking into new buildings – be they humble houses or majestic skyscrapers – all made of mind-stuff.

Excitement. Expectation. Vulnerability. But also exploration, openness, and connection.

And so I ask that question. And I eagerly wait to see the building that will be created before my eyes. Minds. Engines of creation.

In a way, isn't that what all great art does?

Billionaire Johann Rupert Worried by AI and Unemployment, Urges People to Read "Robots Will Steal Your Job, But That's OK"

See video

Multi-billionaire Johann Rupert, CEO of luxury giant Richemont, takes a stance against the growing wealth gap, calling it 'unfair' and 'unsustainable', and urges people to read my book "Robots Will Steal Your Job, But That's OK" as evidence of the next wave of unemployment brought on by artificial intelligence and automation. Times are changing.

Announcement: I'm creating a course on AI and Robots Stealing Jobs; I will publish more videos on konoz in the future: http://bit.ly/konoz-fede-robots-rupert-fb

VISIONEERS, or how to stop complaining and start fixing global problems

It's not every day that you get to see the future happening right before your eyes. We're so focused on the day-to-day, paralyzed by uninformative and amygdala-stimulating news reports, that we rarely allow ourselves to take some time off to think about the future of humanity. The challenges we face today seem so far out of our reach, and we feel so insignificant, that even when we do ponder what's coming next, it's no more than a mere intellectual exercise.

However, there are people who not only think about the future constantly, but proactively make plans on how to improve it, and often deliver on the promises. Last week I was privileged enough to be part of such a group at the XPRIZE VISIONEERING conference in Los Angeles.


Presenting on the XPRIZE stage.

XPRIZE is the brainchild of my dear friend Peter Diamandis, and what this project has accomplished in just a few years is nothing short of extraordinary. The story goes that Peter's childhood dream was to become an astronaut, but he didn't meet NASA's standards of physical aptitude. So he decided he would go to space himself.

Most people would stop at that thought, knowing that it would remain a child's dream and nothing more. Then again, Peter is not like most people. He was so determined to go to space that over the past twenty years he almost single-handedly rekindled global interest in space exploration. The 1996 Ansari XPRIZE – a $10-million prize awarded to the first privately financed team that could build and fly a three-passenger vehicle 100 kilometers into space twice within two weeks – was what led Richard Branson to start Virgin Galactic and his private space enterprise, and many say it gave Elon Musk the inspiration to pursue SpaceX.


Since then, XPRIZE has become the new standard for disruptive innovation in areas where things had been stagnating for decades, either due to market failures or because of circumstances beyond any individual's control. The concept is simple: put out a $10 or $20 million prize for the first team to do X, X being whatever currently unresolved challenge humanity is facing. Many teams compete in a friendly "coopetition", but only the best wins. The genius of this approach is that the total amount of capital spent and value generated is much greater than the prize to be won. Teams collectively spend huge amounts of money, sometimes hundreds of millions of dollars, on the off chance of taking home the $10 million prize. But in the process, they jumpstart an ecosystem of innovation in their countries and communities, in a sector that had been stagnating for years. The winners will open source their technology for the benefit of all humanity.
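
As a back-of-the-envelope illustration of that leverage effect, here is a short Python sketch. The numbers are hypothetical, not actual XPRIZE figures.

    # Hypothetical numbers only: a rough sketch of how an incentive prize can
    # mobilize far more R&D spending than the purse itself costs.

    def prize_leverage(purse, teams, avg_team_spend):
        """Return total competitor spend and the leverage ratio (spend per prize dollar)."""
        total_spend = teams * avg_team_spend
        return total_spend, total_spend / purse

    # Suppose 20 teams each spend $5 million chasing a $10 million purse.
    total, leverage = prize_leverage(purse=10e6, teams=20, avg_team_spend=5e6)
    print(f"Total R&D spend: ${total / 1e6:.0f}M, or {leverage:.0f}x the purse")
    # -> Total R&D spend: $100M, or 10x the purse

Under these assumptions the sponsor pays out $10 million once, while roughly ten times that amount of research effort is funded by the competitors themselves.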


Since its creation, XPRIZE projects include:

  • super-efficient vehicles that achieve 100 MPGe (2.35 liters/100 kilometers) fuel efficiency, produce less than 200 grams/mile well-to-wheel CO2-equivalent emissions, and could be manufactured for the mass market
  • successfully launching, landing, and operating a rover on the lunar surface
  • doubling the industry's previous best oil recovery rate tested in controlled conditions by exceeding 2,500 gallons per minute (with at least 70% efficiency of oil collected over water)
  • a mobile device that can diagnose patients better than or equal to a panel of board-certified physicians
  • free Android apps to spread reading, writing, and arithmetic skills, and prove their effectiveness over an 18-month period in African pilot communities

The list keeps growing every year.

So how do they decide what the next XPRIZE is going to be? Every year the team organizes a two-day retreat in Los Angeles called, quite appropriately, VISIONEERING. In the spirit of friendly coopetition, visioneers form teams and compete for the best idea, voting democratically at each stage. Some of these ideas might go on to become the next XPRIZE.

This year I was asked to lead the Future of Work session as a visiting expert.

XPRIZE Visioneering: Future of Work

Each year, brilliant scientists, philanthropists, heads of innovation, and corporate leaders gather for a multi-day Visioneering workshop to brainstorm, debate, and prioritize which of the world's Grand Challenges might be solved through incentivized prize competition.

This year's Visioneering takes place May 7-8 in Rancho Palos Verdes, CA, where attendees compete with one another to design and pitch innovative, incentivized prize concepts across a variety of Grand Challenge areas, in the hopes that theirs will become the next XPRIZE to be launched. (The $10M Qualcomm Tricorder XPRIZE was one such past winner that emerged from a Visioneering workshop.)

I am so incredibly humbled that XPRIZE has asked me to lead THE FUTURE OF WORK team. If you have been following my research, I don't need to remind you that as much as 50% of jobs in the US and Europe are at risk of being lost to automation in the next decade or two.

What are the risks and opportunities created by technological unemployment? How will we prepare a workforce when jobs are scarcer, require more skill, and people work and live for decades longer than they used to? What are the opportunities to make work more rewarding and enjoyable? How can XPRIZE competitions ease this transition in society?

These are the questions that we will try to answer next week in Los Angeles, alongside some of the smartest and most incredible people on the planet.

Visioneering is where ideas compete. Throughout the experience, attendees pitch their ideas to each other and vote to advance the strongest concepts. Visioneering culminates with the award of the Grand Prize to the winning prize concept. The XPRIZE team then works with the attendees who created the concept to develop it into an XPRIZE competition that has the potential to be launched and awarded.

Let the best idea win. Whatever comes out, it will be a win for all of humanity.
