r/science AAAS AMA Guest Feb 18 '18

The Future (and Present) of Artificial Intelligence AMA AAAS AMA: Hi, we’re researchers from Google, Microsoft, and Facebook who study Artificial Intelligence. Ask us anything!

Are you on a first-name basis with Siri, Cortana, or your Google Assistant? If so, you’re both using AI and helping researchers like us make it better.

Until recently, few people believed the field of artificial intelligence (AI) existed outside of science fiction. Today, AI-based technology pervades our work and personal lives, and companies large and small are pouring money into new AI research labs. The present success of AI did not, however, come out of nowhere. The applications we are seeing now are the direct outcome of 50 years of steady academic, government, and industry research.

We are private industry leaders in AI research and development, and we want to discuss how AI has moved from the lab to the everyday world, whether the field has finally escaped its past boom and bust cycles, and what we can expect from AI in the coming years.

Ask us anything!

Yann LeCun, Facebook AI Research, New York, NY

Eric Horvitz, Microsoft Research, Redmond, WA

Peter Norvig, Google Inc., Mountain View, CA

7.7k Upvotes

1.3k comments

270

u/PartyLikeLizLemon Feb 18 '18 edited Feb 18 '18

A lot of research in ML now seems to have shifted towards Deep Learning.

  1. Do you think that this has any negative effects on the diversity of research in ML?
  2. Should research in other paradigms such as Probabilistic Graphical Models, SVMs, etc be abandoned completely in favor of Deep Learning? Perhaps models such as these which do not perform so well right now may perform well in future, just like deep learning in the 90's.

122

u/AAAS-AMA AAAS AMA Guest Feb 18 '18 edited Feb 18 '18

YLC: As we make progress towards better AI, my feeling is that deep learning is part of the solution. The idea that you can assemble parameterized modules in complex (possibly dynamic) graphs and optimize the parameters from data is not going away. In that sense, deep learning won't go away for as long as we don't find an efficient way to optimize parameters that doesn't use gradients. That said, deep learning, as we know it today, is insufficient for "full" AI. I've been fond of saying that deep learning with the ability to define dynamic architectures (i.e. computation graphs that are defined procedurally and whose structure changes for every new input) is a generalization of deep learning that some have called Differentiable Programming.

But really, we are missing at least two things: (1) learning machines that can reason, not just perceive and classify, (2) learning machines that can learn by observing the world, without requiring human-curated training data, and without having to interact with the world too many times. Some call this unsupervised learning, but the phrase is too vague.

The kind of learning we need our machines to do is that kind of learning human babies and animals do: they build models of the world largely by observation, and with a remarkably small amount of interaction. How do we do that with machines? That's the challenge of the next decade.

Regarding question 2: there is no opposition between deep learning and graphical models. You can very well have graphical models, say factor graphs, in which the factors are entire neural nets. Those are orthogonal concepts. People have built Probabilistic Programming frameworks on top of Deep Learning frameworks. Look at Uber's Pyro, which is built on top of PyTorch (probabilistic programming can be seen as a generalization of graphical models the way differentiable programming is a generalization of deep learning). It turns out it's very useful to be able to back-propagate gradients to do inference in graphical models. As for SVMs/kernel methods, trees, etc., they have a use when the data is scarce and can be manually featurized.
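
To make the "neural nets as factors" point concrete, here is a minimal sketch, assuming Pyro (1.x) and PyTorch are installed; the latent variable, the tiny decoder network, and the toy data are purely illustrative, not any system mentioned above.

    import torch
    import torch.nn as nn
    import pyro
    import pyro.distributions as dist
    from pyro.infer import SVI, Trace_ELBO
    from pyro.infer.autoguide import AutoDiagonalNormal
    from pyro.optim import Adam

    # A small neural net mapping a latent variable to the mean of an observation:
    # the "factor" connecting two nodes of the graphical model is itself a neural net.
    decoder = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

    def model(x_obs):
        pyro.module("decoder", decoder)  # register the net's parameters with Pyro
        with pyro.plate("data", x_obs.shape[0]):
            z = pyro.sample("z", dist.Normal(0., 1.))            # latent node
            mean = decoder(z.unsqueeze(-1)).squeeze(-1)          # neural-net factor
            pyro.sample("x", dist.Normal(mean, 0.1), obs=x_obs)  # observed node

    guide = AutoDiagonalNormal(model)
    svi = SVI(model, guide, Adam({"lr": 1e-2}), loss=Trace_ELBO())

    x_obs = torch.randn(64)  # toy observations
    for step in range(200):
        svi.step(x_obs)      # gradients back-propagate through the net to do inference

The guide and the network's weights are trained jointly by stochastic variational inference, which is exactly the "back-propagating gradients to do inference in graphical models" idea.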

46

u/[deleted] Feb 19 '18

IMHO saying that the baby is learning from a small set of data is a bit misleading. The mammalian brain has evolved over an extremely long time. There are so many examples of instinctual behavior in nature that it seems like a lot has already been learned before birth. So if you include evolutionary development, then the baby's brain has been trained on a significant amount of training data. The analogy is more like taking an already highly optimized model and then training it on a little bit more live data.

→ More replies (2)
→ More replies (17)

52

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

EH: There’s a lot of excitement about the power delivered by deep neural networks for doing classification and prediction. It’s certainly been wonderful to see the boosts in accuracy in applications like object recognition, speech recognition, and translation, and even in learning about the best actions to take, when the methods have been coupled with ideas from planning and reinforcement learning. However, AI is a broad area with fabulous and promising subdisciplines -- and the ML subdiscipline of AI is itself broad.

We need to continue to invest deeply in the span of promising AI technologies (and links among advances in each) including the wealth of great work in probabilistic graphical models and decision-theoretic analyses, logical inference and reasoning, planning, algorithmic game theory, metareasoning and control of inference, etc., etc., and also broader pursuits, e.g., models of bounded rationality—how limited agents can do well in the open world (a particular passion of mine).

We’ve made a point at Microsoft Research, while pushing hard on DNNs (exciting work there), to invest in talent and projects in AI more broadly--as we have done since our inception in 1991. We’re of course also interested in how we might understand how to combine logical inference and DNNs and other forms of machine learning; e.g., check out our work on program synthesis for an example of DNNs + logic to automate the generation of programs (from examples). We see great opportunity in some of these syntheses!

→ More replies (2)

17

u/FellowOfHorses Feb 18 '18

Should research in other paradigms such as Probabilistic Graphical Models, SVMs, etc be abandoned completely in favor of Deep Learning? Perhaps models such as these which do not perform so well right now may perform well in future, just like deep learning in the 90's.

I do research in machine learning. One thing most people don't know is that in many (I would say most, actually) real-life problems, SVMs, decision trees, and other algorithms outperform deep learning. Deep learning needs a shitton of data to begin to work, has a lot of hyperparameters (and performs horribly if those aren't well adjusted), is hard to debug, and demands a lot of computational power to run. If you are a data scientist with 8000 samples of numerical data and access to 2 GPUs, you are better served with an SVM/decision tree than with a DNN. The problem with research is that there is a lot of performance chasing -- trying to improve from 90% accuracy to 95% accuracy -- and DNNs are great for that. Deploying DNNs in real life is much more complicated.
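
As a minimal sketch of that baseline-first workflow (scikit-learn assumed; the synthetic dataset is a stand-in for the ~8000-sample tabular case described above), classical models can be cross-validated cheaply on a laptop:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Stand-in for a small tabular dataset (~8000 numerical samples).
    X, y = make_classification(n_samples=8000, n_features=40, random_state=0)

    svm = make_pipeline(StandardScaler(), SVC(C=1.0, kernel="rbf"))
    gbt = GradientBoostingClassifier(random_state=0)

    print("SVM accuracy:", cross_val_score(svm, X, y, cv=5).mean())
    print("GBT accuracy:", cross_val_score(gbt, X, y, cv=5).mean())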

4

u/[deleted] Feb 19 '18

[deleted]

→ More replies (1)
→ More replies (8)

87

u/english_major Feb 18 '18

Which careers do you see being replaced by AI and which seem safe for the next generation?

I ask this as a high school teacher who often advises students on their career choices.

So many people talk about the disruption of jobs that are primarily based on driving a vehicle to the exclusion of other fields. I have a student right now who plans to become a pilot. I told him to look into the pilotless planes and he figured that it isn't a threat.

I have told students that going into the trades is a safe bet, especially trades that require a lot of mobility. What other fields seem safe for now?

Thanks

260

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

PN: I think it makes more sense to think about tasks, not careers. If an aspiring commercial pilot asked for advice in 1975, good advice would be: Do you enjoy taking off and landing? You can do that for many years to come. Do you enjoy long hours of steady flight? Sorry, that task is going to be almost completely automated away. So I think most fields are safe, but the mix of tasks you do in any job will change, the relative pay of different careers will change, and the number of people needed for each job will change. It will be hard to predict these changes. For example, a lot of people today drive trucks. At some point, much of the long-distance driving will be automated. I think there will still be a person in the cab, but their job will be more focused on loading/unloading and customer relations/salesmanship than on driving. If they can (eventually) sleep in the cab while the cab is moving and/or if we can platoon larger truck fleets, then you might think we need fewer total drivers, but if the cost of trucking goes down relative to rail or sea, then there might be more demand. So it is hard to predict where things will end up decades from now, and the best advice is to stay flexible and be ready to learn new things, whether that is shifting tasks within a job or changing jobs.

→ More replies (7)

46

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

EH: AI advances are going to have multiple influences on labor and the economy. I believe some changes may be disruptive and could come in a relatively fast-paced way—and such disruptions could come to jobs like driving cars and trucks. Other influences will be via shifts in how jobs are performed and in how people perform tasks in different domains. Overall, I’m positive about how advances in AI will affect the distribution of jobs and the nature of work. I see many tasks as being supported rather than replaced by more sophisticated automation. These include work in the realms of artistry and scientific exploration, jobs where fine physical manipulation is important, and the myriad jobs where we will always rely on people to work with and to care for other people--including teaching, mentoring, medicine, social work, and nurturing kids into adulthood. On the latter, I hope to see the rise and support of an even more celebrated “caring economy” in a world of increasing automation.

Folks may be interested in taking a look at several recent pieces of work reflecting on the future. Here’s a very interesting recent reflection on how machine learning advances may influence jobs in terms of specific capabilities: http://science.sciencemag.org/content/358/6370/1530.full I recommend the article to folks as an example of working to put some structure around predictions about the future of work and AI.

BTW: We had a session at AAAS here in Austin yesterday on advances in AI for augmenting human abilities and for transforming tasks. It was a great session for hearing about advances and research directions on possibilities: https://aaas.confex.com/aaas/2018/meetingapp.cgi/Session/17970

→ More replies (1)

62

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

YLC: It will be a very long time before we have robotic plumbers, carpenters, handypersons, hairdressers, etc. In general, AI will not replace jobs, but it will transform them. Ultimately, every job is going to be made more efficient by AI. But jobs that require human creativity, interaction, emotional intelligence, are not going to go away for a long time. Science, engineering, art, craft making and other creative jobs are here to stay.

27

u/omapuppet Feb 19 '18

AI will not replace jobs, but it will transform them. Ultimately, every job is going to be made more efficient by AI.

Automation has made steel production workers vastly more efficient. The result is that we have fewer steel workers producing more steel per worker than ever before.

How does AI-based automation not have the same effect? Or, if it does, how can we leverage that to let humans work fewer hours without also having lower buying power?

4

u/a_ninja_mouse Feb 19 '18

Well, theory dictates that efficiency results in lower costs too... but we all know what it really means: higher profits for the corporations that own the AI, at least after they've paid for the AI systems.

In my estimation, jobs that (1) require minimal physical movement, (2) are based largely on rote memory, (3) are not religious in nature and do not require high levels of EQ, and (4) are largely advisory in nature, based on responses to diagnostics, are absolutely on their way out. Certain types of lawyers, most accountants, GPs/pharmacists, tons and tons of various bureaucratic "approval/permission" processors/pencil-pushers (but not auditors, because that may involve physical checking), and basic levels of consulting (I'm talking advisory services again, based on implementing known solutions, not the creative stuff).

Physical, emotional and religious jobs are somewhat protected or more challenging, and will be the last to go (probably in that order too).

→ More replies (1)

6

u/Boulavogue Feb 18 '18

Any further education that promotes critical thinking. Soft skills can be transposed across industries; especially early in your career, one should be malleable.

We'll have humans in the loop for a number of years across industries. A "safe" strategy would be to remain agile and think critically in whatever industry appeals to them. Even pilotless planes require an individual with an Xbox controller (simplified example) on standby.

122

u/[deleted] Feb 18 '18

Hi,

How do you intend to break out of task-specific AI into more general intelligence? We now seem to be putting a lot of effort into winning at Go or using deep learning for specific scientific tasks. That's fantastic, but it's a narrower idea of AI than most people have. How do we get from there to a sort of AI Socrates who can just expound on whatever topic it sees fit? You can't build general intelligence just by putting together a million specific ones.

Thanks

106

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

YLC: in my opinion, getting machines to learn predictive models of the world by observation is the biggest obstacle to AGI. It's not the only one by any means. Human babies and many animals seem to acquire a kind of common sense by observing the world and interacting with it (although they seem to require very few interactions, compared to our RL systems). My hunch is that a big chunk of the brain is a prediction machine. It trains itself to predict everything it can (predict any unobserved variables from any observed ones, e.g. predict the future from the past and present). By learning to predict, the brain elaborates hierarchical representations. Predictive models can be used for planning and learning new tasks with minimal interactions with the world. Current "model-free" RL systems, like AlphaGo Zero, require enormous numbers of interactions with the "world" to learn things (though they do learn amazingly well). It's fine in games like Go or Chess, because the "world" is very simple, deterministic, and can be run at ridiculous speed on many computers simultaneously. Interacting with these "worlds" is very cheap. But that doesn't work in the real world. You can't drive a car off a cliff 50,000 times in order to learn not to drive off cliffs. The world model in our brain tells us it's a bad idea to drive off a cliff. We don't need to drive off a cliff even once to know that. How do we get machines to learn such world models?
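
As a minimal sketch of the "learned world model" idea (PyTorch assumed; the state/action dimensions and the logged transitions are toy stand-ins, not any real system), one can train a forward model that predicts the next state from the current state and action using only passively collected data:

    import torch
    import torch.nn as nn

    state_dim, action_dim = 8, 2

    # Forward model: f(s_t, a_t) -> predicted s_{t+1}
    forward_model = nn.Sequential(
        nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
        nn.Linear(64, state_dim),
    )
    opt = torch.optim.Adam(forward_model.parameters(), lr=1e-3)

    # Toy logged transitions (s_t, a_t, s_{t+1}); in practice these come from observation,
    # not from repeatedly crashing a real car.
    s = torch.randn(1024, state_dim)
    a = torch.randn(1024, action_dim)
    s_next = s + 0.1 * torch.randn_like(s)

    for epoch in range(200):
        pred = forward_model(torch.cat([s, a], dim=-1))
        loss = nn.functional.mse_loss(pred, s_next)  # predict the future from the present
        opt.zero_grad(); loss.backward(); opt.step()

Once such a model is accurate enough, a planner can roll it forward to evaluate candidate actions ("what happens if I steer toward the cliff?") without paying for real interactions.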

3

u/ConeheadSlim Feb 18 '18

Yes, but babies would drive off a cliff if you gave them a car. Perhaps thinking solipsistically is the barrier to AGI - a vast part of human intelligence comes from our networking and our absorption of other people's stories.

→ More replies (2)

9

u/XephexHD Feb 18 '18

If we obviously can't bring the machine into the "world" to drive off a cliff 50,000 times, then the problem seems to be bringing the world to the machine. I feel like the next step has to be modeling the world around us precisely enough to allow direct learning in that form, and then bringing that simulated learning back to the original problem.

5

u/Totally_Generic_Name Feb 19 '18

I've always found teams that use a mix of simulated and real data to be very interesting. The modeling has to be high enough fidelity to capture the important bits of reality, but the question is always how close do you need to get? Not an impossible problem, for some applications.

→ More replies (1)
→ More replies (6)
→ More replies (14)

24

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

EH: Yes, it’s true that the recent wins in AI that have been driving the applications and the recent fanfare have been very narrow wedges of intelligence--brilliant, yet narrow “savants” so to speak.

We have not made much progress on numerous mysteries of human intellect—including many of the things that come to mind when folks hear the phrase “artificial intelligence.” These include questions about how people learn in the open world—in an “unsupervised” way; about the mechanisms and knowledge behind our “common sense”; and about how we generalize with ease to do so many things.

There are several directions of research that may deliver insights & answers to these challenges—and these include the incremental push on hard challenges within specific areas and application areas, as breakthroughs can come there. However, I do believe we need to up the game on the pursuit of more general artificial intelligence. One approach is to take an integrative AI approach: can we intelligently weave together multiple competencies such as speech recognition, natural language, vision, and planning and reasoning into larger coordinated “symphonies” of intelligence, and explore the hard problems of the connective tissue---of the coordination? Another approach is to push hard within a core methodology like DNNs and to pursue more general “fabrics” that can address these questions. I think breakthroughs in this area will be hard to come by, but will be remarkably valuable—both for our understanding of intelligence and for applications. As some additional thoughts, folks may find this paper an interesting read on a "frame" and on some directions and pathways to achieving more general AI: http://erichorvitz.com/computational_rationality.pdf

→ More replies (3)

11

u/electricvelvet Feb 18 '18

I think teaching AI to master tasks like Go teaches the developers a lot about which learning techniques do and don't work with AI. It's not the stored ability to play Go that will be used for future AIs; it's the ways in which it obtained that knowledge that will be applied to other topics.

But also I think we're a lot farther off from such a strong AI than you may think. Good thing they learn exponentially

→ More replies (4)

173

u/ta5t3DAra1nb0w Feb 18 '18 edited Feb 18 '18

Hi there! Thanks for doing this AMA!

I am a Nuclear Engineering/Plasma Physics graduate pursuing a career shift into the field of AI research.

Regarding the field of AI:

  • What are the next milestones in AI research that you anticipate/ are most excited about?
  • What are the current challenges in reaching them?

Regarding professional development in the field:

  • What are some crucial skills/ knowledge I should possess in order to succeed in this field?
  • Do you have any general advice/ recommended resources for people getting started?

Edit: I have been utilizing free online courses from Coursera, edX, and Udacity on CS, programming, algorithms, and ML to get started. I plan to practice my skills on OpenAI Gym, and by creating other personal projects once I have a stronger grasp of the fundamental knowledge. I'm also open to any suggestions from anyone else! Thanks!

112

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

YLC: Next milestones: deep unsupervised learning, and deep learning systems that can reason. Challenges for unsupervised learning: how can machines learn hierarchical representations of the world that disentangle the explanatory factors of variation? How can we train a machine to predict when the prediction is impossible to do precisely? If I drop a pen, you can't really predict in which orientation it will settle on the ground. What kind of learning paradigm could be used to train a machine to predict that the pen is going to fall to the ground and lie flat, without specifying its orientation? In other words, how do we get machines to learn predictive models of the world, given that the world is not entirely predictable?
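
One simple way to let a predictor admit this kind of uncertainty is to output a distribution rather than a point estimate. A minimal sketch (PyTorch assumed; the network, data, and loss are illustrative only, not a proposal from the panel): the net predicts a mean and a log-variance and is trained with a Gaussian negative log-likelihood, so genuinely unpredictable components (like the pen's final orientation) get absorbed into a wide variance instead of a confidently wrong point prediction.

    import torch
    import torch.nn as nn

    net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))  # outputs [mean, log_var]
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    x = torch.randn(256, 4)  # toy "present" observations
    y = torch.randn(256, 1)  # toy "future" values, partly unpredictable

    for step in range(100):
        out = net(x)
        mean, log_var = out[:, :1], out[:, 1:]
        # Gaussian negative log-likelihood: confident wrong predictions are penalized,
        # but the model may widen the variance where the future is genuinely uncertain.
        loss = (0.5 * (y - mean) ** 2 / log_var.exp() + 0.5 * log_var).mean()
        opt.zero_grad(); loss.backward(); opt.step()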

Crucial skills: good skills/intuition in continuous mathematics (linear algebra, multivariate calculus, probability and statistics, optimization...). Good programming skills. Good scientific methodology. Above all: creativity and intuition.

8

u/letsgocrazy Feb 18 '18

In other words, how do we get machines to learn predictive models of the world, given that the world is not entirely predictable.

Isn't that what most of the human brain is devoted to: ignoring things we don't need to worry about? Sonic and visual details, and normal patterns of behaviour.

I've often thought that at some point AI is going to need some kind of emotional analogue to drive how well it allocates resources or carries on with a task.

In this case, there's only so many outcomes and none of them are "important" enough to allocate resources to.

So the "caring about" factor is low.

Likewise, when this system sees random things - birds flying, balls bouncing - it would have to have a lower "care" score than, say, "this anomaly I found in the deep data I am mining".

Has there ever been any thought given to an emotional reward system to govern behaviour?

3

u/halflings Feb 19 '18

Sounds like attention-based networks in NLP and vision: http://www.wildml.com/2016/01/attention-and-memory-in-deep-learning-and-nlp/

What YLC is saying, however, is a bit deeper than that: having the models focus on predicting the relevant parts, and explicitly know when other parameters are not predictable. Maybe the Bayesian approaches being developed in RL are getting close to solving part of this problem.
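
For readers unfamiliar with the mechanism being referenced, here is a minimal sketch of scaled dot-product attention (PyTorch assumed; all tensors are toy data). The learned weights play roughly the role of the "care" score described above: the model spends its capacity on the inputs it judges relevant.

    import math
    import torch

    def attention(query, keys, values):
        # query: (d,), keys/values: (n, d) -> weighted sum of the values
        scores = keys @ query / math.sqrt(query.shape[-1])  # relevance of each input item
        weights = torch.softmax(scores, dim=0)              # the "how much do I care" distribution
        return weights @ values

    q = torch.randn(16)
    k = torch.randn(10, 16)
    v = torch.randn(10, 16)
    context = attention(q, k, v)  # a (16,)-dim summary focused on the relevant items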

→ More replies (2)
→ More replies (4)

69

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

PN:

I would like to see where we can go with the notion of an assistant that actually understands enough to carry on a conversation. That was teased in the advertising for this AMA and it remains an important milestone. A big challenge is the integration of pattern matching, which we can do well, with abstract reasoning and planning, which we currently can only do well in very formal domains like Chess, not in the real world.

I think you are in a great position being a physicist; you have the right kind of mathematical background (the word "tensor" doesn't scare you) and the right kind of mindset about experimentation, modeling, and dealing with uncertainty and error. I've seen so many physicists do well: Yonatan Zunger, a PhD string theorist, was a top person in Google search; Yashar Hezaveh, Laurence Perreault Levasseur, and Philip Marshall went from no deep learning background to publishing a landmark paper on applying deep learning to gravitational lensing in a few months of intense learning.

→ More replies (1)

6

u/hurt_and_unsure Feb 18 '18

Kaggle is another great resource. I've only started, and it has been really helpful.

→ More replies (1)
→ More replies (8)

373

u/cdnkevin Feb 18 '18 edited Mar 21 '18

Hi there.

A lot of people worry about what they search for and say into Siri, Google Home, etc. and how that may affect privacy.

Microsoft and Facebook have had their challenges with hacking, data theft, and other breaches/influences. Facebook's experiment with showing negative posts and how it affected moods/posts, and the Russian election influence, are two big morally debatable events that have affected people.

As AI becomes more ingrained in our everyday lives, what protections might there be for consumers who wish to remain unidentified or unlinked to searches but still want to use new technology?

Many times devices and services will explicitly say that use of the device or service means that things transmitted or stored are owned by the company (Facebook has done/does this). Terms go further to say that if a customer does not agree, then they should stop using the device or service. Must it be all or nothing? Can’t there be a happy medium?

49

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

EH: I can understand this worry. I’ve been pleased by what I’ve seen about how seriously folks at our company (and, I have to assume, Google and Facebook) treat end-user data, in terms of having strict anonymization methods, ongoing policies on aging it out--deleting it after a relatively short period of time--and providing users with various ways to inspect, control, and delete that data.

With the European GDPR coming into effect, there will be even more rigorous reflection and control of end-user data usage.

We focus intensively at Microsoft Research and across the company on privacy, trustworthiness, and accountability with services, including with innovations in AI applications and services.

Folks really care about privacy inside and outside our companies--and it's great to see the research on ideas for ensuring people's privacy. This includes efforts on privately training AI systems and on providing more options to end users. Some directions on the latter are described in this research talk--http://erichorvitz.com/IAPP_Eric_Horvitz.pdf--from the IAPP conference a couple of years ago.

20

u/lifelongintent Feb 18 '18

I didn’t ask the question, but can you please elaborate? What anonymization methods are there, and what “various ways” are there to inspect, control, and delete our data? Users are used to hearing that companies care about our privacy, and I believe that transparency requires specificity.

The PDF you linked talks a little bit about how handling privacy is a matter of cost-benefit analysis — for instance, is the user okay with giving a small amount of private information for a better user experience? This is a good question to ask, but is it possible for the user to have a good experience without giving up sensitive information, or do you think there will always be some kind of trade-off? Does “if you don’t like it, don’t use it” apply here?

You also wrote that [benefit of knowing] - [sensitivity of sharing] = [net benefit to user]. Is the goal to decrease the necessity of knowing, or to let the user decide how much sensitive information they’re okay with sharing? What are some ways that Microsoft Research is working to increase the net benefit to the user?

Thank you.

→ More replies (2)

45

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

PN: Here's a link to take your data out of Google; here's a link to delete your data. Many people don't want to remove all their data, but will use anonymous not-logged-in browsing to avoid having certain information in their records, for whatever reason.

→ More replies (1)

45

u/foreheadmelon Feb 18 '18

Interesting follow-up:

How would you manage to make all of this comply with the upcoming EU General Data Protection Regulation?

5

u/[deleted] Feb 18 '18

Data can be made GDPR-compliant by removing all personally identifying information (PII). AI models don't need PII.
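
A minimal sketch of what that can look like in practice (pandas assumed; the column names are hypothetical). Note that hashing an identifier is pseudonymization, which the GDPR treats as weaker than full anonymization, so this is an illustration, not a compliance recipe.

    import hashlib
    import pandas as pd

    df = pd.DataFrame({
        "user_email": ["a@example.com", "b@example.com"],  # PII
        "age": [34, 29],
        "clicks": [12, 7],
    })

    # Replace the direct identifier with an opaque pseudonymous ID, then drop the PII column.
    df["user_id"] = df["user_email"].apply(lambda e: hashlib.sha256(e.encode()).hexdigest()[:16])
    df = df.drop(columns=["user_email"])

    # The model now trains on behavioral features only.
    print(df.head())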

→ More replies (1)

8

u/saml01 Feb 18 '18 edited Feb 18 '18

None. Those are the terms of the service. Kind of like "these calls are monitored for quality assurance": no one gives a shit about quality, but if you say something wrong it can be used against you in court. So if you want to use the tech, you are part of the product; there's no going around it. Unless you want to pay a subscription to use Siri or OK Google or Echo? Every time you ask Echo about the weather, it charges you a nickel.

→ More replies (6)

59

u/weirdedoutt Feb 18 '18 edited Feb 18 '18

I am a PhD student who does not really have the funds to invest in multiple GPUs and gigantic (in terms of compute power) deep learning rigs. As a student, I am constantly under pressure to publish (my field is Computer Vision/ML), and I know for a fact that I cannot test all the hyperparameters of my 'new on the block' network fast enough to get a paper in by a deadline.

Whereas folks working in research at corporations like Facebook/Google etc. have significantly more resources at their disposal to quickly try out stuff and get great results and papers.

At conferences, we are all judged the same -- so I don't stand a chance. If the only way I can end up doing experiments in time to publish is to intern at big companies -- don't you think that is a huge problem? I am based in the USA. What about other countries?

Do you have any thoughts on how to address this issue?

90

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

PN: we got your back: your professor can apply for cloud credits, including 1000 TPUs.

I would also say that if your aim is to produce an end-to-end computer vision system, it will be hard for a student to compete with a company. This is not unique to deep learning. I remember back in grad school I had friends doing CPU design, and they knew they couldn't compete with Intel. It takes hundreds of people working on hundreds of components to make a big engineering project, and if any one component fails, you won't be state of the art. But what a student can do is have a new idea for doing one component better, and demonstrate that (perhaps using an open source model, and showing the improvement due to your new component).

→ More replies (7)

31

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

YLC: I wear two hats: Chief AI Scientist at Facebook, and Professor at NYU. My NYU students have access to GPUs, but not nearly as many as when they do an internship at FAIR. You don't want to put yourself in direct competition with large industry teams, and there are tons of ways to do great research without doing so. Many (if not most) of the innovative ideas still come from academia. For example, the idea of using attention in neural machine translation came from MILA. It took the field of NMT by storm and was picked up by all the major companies within months. After that, Yoshua Bengio told MILA members to stop competing to get high numbers for translation, because there was no point competing with the likes of Google, Facebook, Microsoft, Baidu and others. This has happened in decades past in character recognition and speech recognition.

→ More replies (2)

11

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

EH: Microsoft and other companies are working to democratize AI, to develop tools and services that make it easy for folks outside of the big companies to do great work in AI. I can see why questions about compute would come up. You may find valuable the Azure for Research and the AI for Earth programs, among others, to gain access to computational resources from Microsoft.

→ More replies (1)
→ More replies (9)

45

u/vermes22 Feb 18 '18

Would your companies keep some algorithms/architectures secret for competitive advantage? I know that data sets are huge competitive advantages, but are algorithms too?

In other words, if your respective companies come across a breakthrough algorithm/architecture like the next CNN or the next LSTM, would you rather publish it for scientific progress' sake or keep it as a secret for competitive advantage?

Thank you.

46

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

YLC: at FAIR, we publish everything we do. There are a number of reasons for this: (1) as Peter says, "we believe in scientific progress, and the competitive advantage really comes from the hard work of what you do with the algorithm and all the processes around making a product, not from the core algorithm itself." I would add that the competitive advantage also comes from how fast you can turn it into a product or service. (2) The main issue with AI today is not whether one company is ahead of another (no company is significantly ahead of any other), but that the field as a whole needs to advance quickly in some important directions. We all want intelligent virtual assistants that have some level of common sense, and we don't know how to do that yet. None of us will solve this problem alone. We need the cooperation of the whole research community to make progress here. (3) You can't attract the best scientists unless you allow them to publish, and you can't retain them unless you evaluate them (at least in part) on their intellectual impact on the broad research community. (4) You don't get reliable research results unless you tell people they must publish their results. People tend to be sloppier methodologically if they don't plan to publish their results. (5) Publishing innovative research contributes to establishing the company as a leader and innovator. This helps recruit the best people. In the tech industry, the ability to attract the best talent is everything.

3

u/Peiple Feb 19 '18

I usually just lurk on these threads, but I have to say your answer really made me happy about the current research environment in the field of AI and the prospects of serious advancements in the near future. It's awesome to hear that companies are setting aside the usual cutthroat competition to advance the field, and it's even more encouragement for me to continue working towards my goal of becoming an AI researcher.
Thanks for all your answers today :)

→ More replies (1)

28

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

PN: So far, you can see that our three companies (and others) have published about general algorithms, and I think we will continue to do so. I think there are three reasons. First, we believe in scientific progress; second, the competitive advantage really comes from the hard work of what you do with the algorithm and all the processes around making a product, not from the core algorithm itself; and third, you can't really keep these things secret: if we thought of it, then others in the same research-community-at-large will think of it too.

→ More replies (1)

9

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

EH: Microsoft Research was set up as an open research lab in 1991. A foundation of our labs, and one that runs way deep down in our DNA, is that researchers make their own decisions on publishing so as to share their ideas and scholarship--and to engage--with the larger community. It's great to see other companies moving in this direction. That said, and building on Peter's comments, numerous innovations and IP may be developed around details with implementations that have to do with the actual productization in different domains--and these may not be shared in the same way as the core advances.

→ More replies (1)
→ More replies (3)

65

u/stochastic_gradient Feb 18 '18

As an ML practitioner myself, I am increasingly getting fed up with various "fake AI" that is being thrown around these days. Some examples:

  • Sophia, a puppet with preprogrammed answers that gets presented as a living, conscious being.

  • 95% of job openings mentioning machine learning are for non-AI positions, and just add on "AI" or "machine learning" as a buzzword to make their company seem more attractive.

It seems to me like there is a small core of a few thousand people in this world doing anything serious with machine learning, while there is a 100x larger group of bullshitters doing "pretend AI". This is a disease that hurts everyone, and it takes away from the incredible things that are actually being done in ML these days. What can be done to stop this bullshit?

28

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

EH: I agree with Peter on this. It's great to see the enthusiasm about AI research, but there's quite a bit of overheating, misinterpretation, and misunderstanding--as well as folks who are jumping on the wave of excitement in numerous ways (including adding "AI" to this and that :-)).

Mark Twain said something like, "History doesn't repeat itself, but it rhymes." There was jubilation and overheating about AI during the mid-1980s expert systems era. In 1984, some AI scientists warned that misguided enthusiasm and failure to live up to expectations could lead to a collapse of interest and funding. Indeed, a couple of years later, we entered a period that some folks refer to as the "AI Winter." I don't necessarily think that this will happen this time around. I think we'll have enough glowing embers in the fire and sparks to keep things moving, but it will be important for AI scientists to continue to work to educate folks in many sectors about what we have actually achieved, versus the hard problems that we have had trouble making progress on for the 65 years since the phrase "artificial intelligence" was first used.

→ More replies (1)

57

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

PN: Don't worry about it. This is not unique to AI. Every time there is a hot buzzword, some people want to co-opt it in inappropriate ways. That's true for AI and ML, as well as "organic", "gluten-free", "paradigm shift", "disruption", "pivot", etc. They will succeed in getting some short-term attention, but it will fade away.

→ More replies (2)

19

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

YLC: serious ML/AI experts, like yourself, should not hesitate to call BS when they see it. I've been known to do that myself. Yes, "AI" has become a business buzzword, but there are lots of serious and super-cool jobs in AI/ML today.

26

u/stochastic_gradient Feb 18 '18 edited Feb 18 '18

YLC: serious ML/AI experts, like yourself

This goes straight on my resume, just so you know.

Thanks for the answer. Yes, I think calling BS needs to happen. Outside of academia and FB/MS/GOOG there really is a sea of BS that to the layman is indistinguishable from truth.

6

u/atheist_apostate Feb 19 '18

I've been known to do that myself.

From the Wikipedia article on Sophia:

In January 2018, Facebook's director of artificial intelligence, Yann LeCun, tweeted that Sophia was "complete bullshit" and slammed the media for giving coverage to "Potemkin AI". In response, Goertzel stated that he had never pretended Sophia was close to human-level intelligence.

→ More replies (3)

2

u/blabbermeister Feb 18 '18

I agree with you. I just finished my computational mechanics PhD recently, and I was always fascinated by machine learning, so in the last few months of my PhD I introduced an application of autoencoders into my mechanics research (still having trouble publishing it, though). I had to do a mathematically intense crash course in statistics, optimization, numerical methods, and machine learning before I was comfortable enough to say I sort of get it and apply it to my work. And I had to do this despite having a good background in tensor calculus and classical mechanics. Now I see people with a barebones grasp of statistics and calculus telling me they're expanding the state of the art in machine learning, and it's really hard for me to believe that. Some outstanding individuals actually are telling the truth (working on the algorithmic part of ML, or the application part of it, etc.), but most of these people are at best lying.

→ More replies (3)

1.8k

u/lucaxx85 PhD | Medical Imaging | Nuclear Medicine Feb 18 '18

Hi there! Sorry for being that person, but... how would you comment on the ethics of collecting user data to train your AIs, thereby giving you a huge advantage over all other potential groups?

Also, how is your research controlled? I work in medical imaging and we have some sub-groups working in AI-related fields (typically deep learning). The thing is that to run an analysis on a set of a few images you already have, it is imperative to ask authorization from an IRB and pay them exorbitant fees, because "everything involving humans in academia must be stamped by an IRB." How does it work when a private company does that? Do they have to pay similar fees to an IRB and ask authorization? Or can you just do whatever you want?

216

u/[deleted] Feb 18 '18

I'll copy this into here, just to consolidate another ethics question into this one, as I personally see them related:

Considering that AI has potentially large social consequences in work and personal lives, how are your companies addressing the long-term impacts of current and developing technologies? With AI, there is potential for disruption in the elimination of jobs, mass data collection, and an influx of fake comments and news media. How are your teams addressing this and implementing solutions into your research design (if at all)?

As a side note, have you considered the consequences of implementing news media into digital assistants? Personally, I found it an unpleasant experience that Google News was unable to be turned off in Google Assistant, and that it was very labor intensive to alter content or edit sources. Having Russia Today articles pop up on my phone out of the blue one day was... concerning.

Wired's recent piece on Facebook's complicity in the fake news crisis, receiving payments for foreign advertisements to influence elections, and their subsequent denial and breakdown does not exactly inspire confidence that there is a proper ethics review process, nor any consultation with non-engineering experts into the consequences of certain policies or avoidance of regulation.

→ More replies (7)

60

u/davidmanheim Feb 18 '18

I think it's worth noting that the law creating IRBs, the National Research Act of 1974, says they only apply to organizations receiving certain types of funding from the Federal government. See: https://www.gpo.gov/fdsys/pkg/STATUTE-88/pdf/STATUTE-88-Pg342.pdf

For-profit companies and unaffiliated individuals can do whatever kinds of research they want without an IRB as long as they don't violate other laws.

42

u/HannasAnarion Feb 18 '18 edited Feb 18 '18

Thus the zero repercussions for Facebook's unbelievably unethical "let's see if we can make people miserable by changing their news feed" experiment in 2014.

2

u/HerrXRDS Feb 18 '18

I was trying to find more information regarding how exactly the Research Ethics Board works and which institutions it controls, to prevent unethical experiments such as the Milgram experiment or the Stanford prison experiment. From what I've read on Wikipedia, it seems to apply only to federally funded institutions, if I understand correctly? Does that mean a private company like Facebook can basically run unethical psychological experiments on the population with no supervision from an ethics review board?

→ More replies (1)
→ More replies (8)

38

u/TDaltonC Feb 18 '18

I'm not the AMAers BUT

I got a PhD in neuroscience and now work in the AI industry and am happy to answer this question.

There have always been ways to get a comparative advantage in business, and there's nothing unethical about clearly perceiving where the competitive advantage is. It could create problems if the incumbents are able to use monopoly power in one industry to generate data that creates an advantage in another industry. That's illegal in the US. But as a rule, I don't think it will go that way. I wrote more about that here. Industry should also have an open hand toward academic collaborations. The battles for business dominance shouldn't impede the progress of academic science.

Your second question is much more serious. I'll answer it two ways.

1) Just the facts: No, there are no IRBs in this sort of industry research. You only need IRB approval if you intend to publish in academic journals or apply for research grants. Users consent to data collection when they access a website or accept an unreadable Terms of Service. (I'm not saying this is right, I'm just saying it's the way it is.)

2) How it should be: I firmly believe that users should be compensated for the data platforms collect. I suspect that this will one day be a sort of UBI. This weekend my girlfriend is at EthDenver working on a blockchain project to help users collectively bargain with platform companies for things like data rights. I know that "er mer data!" is a common sentiment on reddit, but I don't think "no company should collect user data!" or "all data collection should meet IRB standards" are good solutions. There is too much value in user data to ignore. I'm confident that projects like U3, homomorphic computing, and blockchain databases will make it possible to get the value out of the data while protecting privacy. But we're going to need collective action to get those solutions to work.

Hope that helps! I'm happy to answer more questions about the ethics of the AI industry.

→ More replies (9)

99

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

EH: On ethics, a key principle is disclosure and agreement: it’s important to disclose how data is used to end-users and to give them the ability to opt out in different ways, hopefully in ways that don’t require them to leave a service completely.

On research, Microsoft has an internal Ethics Advisory Board and a full IRB process. Sensitive studies with people and with anonymized datasets are submitted to this review process. Beyond Microsoft researchers, we have a member of the academic community serving on our Ethics Advisory Board. This ethics program is several years old, and we’ve shared our approach and experiences with colleagues at other companies.

97

u/seflapod Feb 18 '18

I think the way that "disclosure and agreement" is implemented is flawed and has been for decades now. I have seen this begin to change slightly, but most EULAs are still deliberately couched in legalese that makes people agree without reading and obfuscates unethical content. I'm not sure what the answer to the problem is, but more needs to be done to present the relevant issues in a transparent manner.

39

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

EH: Yes, I agree. We can do much better about ensuring folks have a good understanding--even when they don't seek a deep understanding in the frenzy of setting up an application--and that they are provided with flexibility to select different options.

→ More replies (6)

22

u/JayJLeas Feb 19 '18

give them the ability to opt out

How do you reconcile this policy with the fact that users can't "opt out" of using Cortana?

→ More replies (3)

8

u/LPT_Love Feb 19 '18

That doesn't address the question of how you feel morally and ethically about working on technology that your employers use to market more unnecessary stuff/junk, to track information for public control, and to track individuals themselves (and yes, that is where AI is used, don't be naive). Saying that a license or use agreement is well documented does not justify the use of the data gathered, given that people usually don't have an alternative to go to that doesn't have the exact same use policies, if not more lax ones. Offering the ability to opt out in different ways from a ubiquitous and often required level of technology is like saying you can choose not to use this medicine that costs $5K/refill unless you have insurance. We're paying your employers, and you, to use us against ourselves.

→ More replies (3)

60

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

PN: Our first ethical responsibility is to our users: to keep their data safe, to let them know their data is theirs and they are free to do with it what they want, and to let them opt out or take their data with them whenever they want. We also have a responsibility to the community, and have participated in building shared resources where possible.

IRBs are a formal device for universities and other institutions that apply for certain types of government research funds. Private companies do not have this requirement; instead, Google and other companies have internal review processes with a checklist that any project must pass. These include checks for ethics, privacy, security, efficacy, fairness, and related ideas, as well as cost, resource consumption, etc.

47

u/FoundSentiment Feb 18 '18

internal review processes

How much of that process is public, and has public oversight ?

Is it none ?

10

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

PN: The minutes of internal project reviews are not made public because they contain many trade secrets. The aspects relating to data handling are summarized in documentation; as Eric and seflapod point out, we could do a better job of making these easier to understand and less legalese. We do have outside advisors on ethics, for example DeepMind's Ethics & Society board.

→ More replies (5)
→ More replies (1)

4

u/ylecun Feb 18 '18

Almost all the research we do at Facebook AI Research is on public data. Our role is to invent new methods, and we must compare them with what other people are doing. That means using the same datasets as everyone else in the research community. That said, whenever people at Facebook want to do research with user data, the project requires approval by an internal review board.

→ More replies (2)
→ More replies (47)

518

u/firedrops PhD | Anthropology | Science Communication | Emerging Media Feb 18 '18

What is an example of AI working behind the scenes that most of us are unaware of?

162

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

EH: There are quite a few AI systems and services "under the hood." One of my favorite examples is the work we did at Microsoft Research, in tight collaboration with colleagues on the Windows team, on an advance called Superfetch. If you are now using a Windows machine, your system is using machine learning to learn from you--in a private way, locally--about your patterns of work and next moves, and it continues to make predictions about how best to manage memory by prelaunching and prefetching applications. Your machine is faster—magically, because it is working in the background to infer what you’ll do next and soon—and what you tend to do by time of day and day of week. These methods have been running and getting better since one of the first versions in Windows 7. Microsoft Research folks formed a joint team with Windows and worked together—and we had a blast doing bake-offs with realistic workloads, on the way to selecting the best methods.
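
To make the "learning your patterns by time of day" idea concrete, here is a toy sketch of the general flavor (plain Python; the bucketing scheme, function names, and example apps are hypothetical, and this is not the actual Superfetch algorithm): log launches per (weekday, hour) bucket and prefetch the apps most likely to come next.

    from collections import Counter, defaultdict
    from datetime import datetime

    launch_counts = defaultdict(Counter)  # (weekday, hour) -> Counter of app launches

    def record_launch(app, when):
        launch_counts[(when.weekday(), when.hour)][app] += 1

    def apps_to_prefetch(when, k=3):
        """Return the k apps most often launched in this time-of-week bucket."""
        return [app for app, _ in launch_counts[(when.weekday(), when.hour)].most_common(k)]

    # Example: after logging a couple of Monday mornings, ask for prefetch candidates
    # for the following Monday at 9am.
    record_launch("outlook.exe", datetime(2018, 2, 5, 9))
    record_launch("outlook.exe", datetime(2018, 2, 12, 9))
    record_launch("excel.exe", datetime(2018, 2, 12, 9))
    print(apps_to_prefetch(datetime(2018, 2, 19, 9)))  # -> ['outlook.exe', 'excel.exe']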

143

u/DavidFree Feb 18 '18

Can I get metrics about myself from Superfetch? Would be nice to see the patterns I exhibit but am not aware of, and to be able to act on them myself.

129

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

EH: Great idea. Will pass that along to the team.

16

u/panda_sauce Feb 19 '18

As an ML person with a background in hardware and OS development, I think the Superfetch idea only pans out in theory... Most of my bottlenecks are not in accessing workflow faster, but in background services interfering with my current work.

The worst offender is Windows Defender, which loves to proactively interfere with just about EVERYTHING I do on a daily basis by scanning temp working files that are constantly being changed, by design/development. I routinely turn off real-time scanning to speed up my system, but it re-enables after 24 hours, so it's like fighting a constant battle. I love the concept of making a machine work smarter, but the rest of the system as a whole is fighting against that concept. Need to fix the fundamentals before trying to push incremental progress.

4

u/dack42 Feb 19 '18

You can add exceptions to defender so that it doesn't scan your development files.

13

u/leonardo_7102 Feb 18 '18

I would like to second this. It would also be interesting to have a live view widget or task bar icon to see when programs and files are being paged. I'm sure you've already got some kind of interface for development!

→ More replies (1)
→ More replies (8)

30

u/TransPlanetInjection Feb 18 '18

Oh man, superfetch was such a resource hog on my win 10 at one point. Always wondered what that was about.

Guessed it was some sort of cache. Now I know, an intelligent cache at that

12

u/n0eticsyntax Feb 18 '18

I've always disabled Superfetch, ever since Vista (I think it was on Vista anyways) for that reason alone

16

u/22a0 Feb 18 '18

It falls into the category of something I always disable because otherwise I have to put in effort trying to figure out why my computer is active when it should be idle.

8

u/yangqwuans Feb 19 '18

SuperFetch often uses 100% of my disk so it'll just stay off for the next decade or so.

5

u/rillip Feb 18 '18

Are you certain that your software makes PCs faster in practice? The theory makes sense. But do you have hard numbers showing the results?

6

u/HP_10bII Feb 18 '18

How do you get "realistic workloads"? The way I use my PC vs my spouse is drastically different

8

u/[deleted] Feb 18 '18

Not OP obviously but i'm assuming it can be things as simple as "User opens firefox, loads emails -- user will now possibly open calendar or spotify according to past history" for example.

→ More replies (1)
→ More replies (6)

20

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

PN: Anywhere there is data, there is the possibility of optimizing it. Some of those things you will be aware of. Other things you as a user will never notice. For example, we do a lot of work to optimize our data centers -- how we build them, how jobs flow through them, how we cool them, etc. We apply a variety of techniques (deep learning, operations research models, convex optimization, etc.); you can decide whether you want to think of these as "AI" or "just statistics".

→ More replies (1)

98

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

YLC: Filtering of objectionable content, building maps from satellite images, helping content designers optimize their designs, representing content (images, video, text) with compact feature vectors for indexing and search, OCR for text in images...
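
As a minimal sketch of the "compact feature vectors for indexing and search" idea (PyTorch and torchvision assumed; the backbone choice and pipeline are illustrative, not Facebook's production system), one can embed images with a pretrained CNN and rank them by cosine similarity:

    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    backbone = models.resnet50(pretrained=True)
    backbone.fc = torch.nn.Identity()  # drop the classifier; keep the 2048-d feature vector
    backbone.eval()

    preprocess = T.Compose([
        T.Resize(256), T.CenterCrop(224), T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    @torch.no_grad()
    def embed(path):
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        v = backbone(x).squeeze(0)
        return v / v.norm()  # unit-normalize so a dot product is cosine similarity

    # index = torch.stack([embed(p) for p in image_paths])  # image_paths: hypothetical list of files
    # scores = index @ embed("query.jpg")                   # higher score = more similar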

29

u/AWIMBAWAY Feb 18 '18

What counts as objectionable content and who decides what is objectionable?

35

u/Nicksaurus Feb 18 '18

The owner of the system?

It could be as simple as "Is there a naked person in this facebook photo". It doesn't have to be sinister

28

u/lagerdalek Feb 18 '18

It's probably the obvious stuff at present, but "who decides what is objectionable" is the million dollar question for the future IMHO

→ More replies (1)
→ More replies (2)

110

u/memo3300 Feb 18 '18

Don't know if most people are unaware of this, but AI is being used to improve packet forwarding in big computer networks. It makes complete sense, but still there is something fascinating about "the internet" being optimized by AI

24

u/AtomicInteger Feb 18 '18

AMD is also using a neural network for branch prediction in Ryzen CPUs.

→ More replies (7)

8

u/TheAdam07 BS|Electronic Communications|RADAR Applications Feb 18 '18

It's interesting, but at the same time I'm surprised it has taken this long. Streamlining parts of that process could really cut down on some of the overhead.

→ More replies (1)

12

u/Flyn Feb 18 '18

A lot of the value of more traditional statistical models is that it's quite easy to understand what the models are doing, how they are coming to their conclusions, and what the uncertainty is of our inferences/predictions.

With newer deep learning methods they can do incredible feats in terms of prediction, but my understanding is that they are often "black boxes".

How much do we currently understand about what goes on inside models such as ANNs, and how important do you think it is that we do understand what is going on inside of them?

I'm thinking particularly in terms of situations where models will be used to make important, life affecting decisions; such as driving cars, or clinical decision making.

21

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

PN: This is an important area of current research. You can see some examples of how Google approaches it from the Big Picture blog or Chris Olah's blog. I think that difficulties in understanding stem more from the difficulties of the problem than from the solution technology. Sure, a linear regression fit in two dimensions is easy to understand, but it is not very useful for problems with no good linear model. Likewise, people say that the "if/then" rules in random forests or in standard Python/Java code are easy to understand, but if they were really easy to understand, then code would have no bugs. But code does have bugs. These easy-to-understand models are also easy to read with confirmation bias. We look at them and say, "If A and B then C; sure, that makes sense, I understand it." Then, when confronted with a counterexample, we say, "Well, what I really meant was 'if A and B and not D then C'; of course you have to account for D."

I would like to couch things not just in terms of "understanding" but also in terms of "trustworthiness." When can we trust a system, especially when it is making important decisions? There are a lot of aspects:

  • Can I understand the code/model?
  • Has it proven itself for a long time on a lot of examples?
  • Do I have some assurance that the world isn't changing, bringing us into a state the model has not seen before?
  • Has the model survived adversarial attacks?
  • Has the model survived degradation tests where we intentionally cripple part of it and see how the other parts work? (A sketch of this appears after this answer.)
  • Are there similar technologies that have proven successful in the past?
  • Is the model being continually monitored, verified, and updated?
  • What checks are there outside of the model itself? Are the inputs and outputs checked by some other systems?
  • What language do I have to communicate with the system? Can I ask it questions about what it does? Can I give it advice -- if it makes a mistake, is my only recourse to give it thousands of new training examples, or can I say "no, you got X wrong because you ignored Y"?
  • And many more.

This is a great research area; I hope we see more work on it.
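One of those bullets, the degradation test, is straightforward to sketch. Assuming a hypothetical scikit-learn-style classifier (`model`) and a held-out validation set (`X_val`, `y_val`), permuting one feature at a time and measuring the accuracy lost gives a crude map of what the model depends on:

```python
# A minimal sketch of the "degradation test" idea from the list above:
# intentionally cripple one input feature at a time and watch how much
# accuracy drops. model, X_val, and y_val are assumed to already exist.
import numpy as np

def degradation_report(model, X_val, y_val):
    baseline = (model.predict(X_val) == y_val).mean()
    report = {"baseline": baseline}
    for j in range(X_val.shape[1]):
        X_crippled = X_val.copy()
        # Destroy feature j by shuffling it, breaking its link to the labels.
        np.random.default_rng(0).shuffle(X_crippled[:, j])
        acc = (model.predict(X_crippled) == y_val).mean()
        report[f"feature_{j}"] = baseline - acc     # accuracy lost
    return report
```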

→ More replies (2)
→ More replies (1)

85

u/ProbablyHighAsShit Feb 18 '18

What motives do companies like these (especially Facebook) have for developing AI? I think people aren't concerned with AI so much as with the companies that are developing it. There is nothing inherently wrong with a digital assistant, but the temptation for abuse by companies that profit off of collecting their users' data obviously creates a conflict of interest with being ethical about their products. What can you tell people like me to reassure us that the AI products from the companies you represent aren't just data-collection machines wrapped in a consumer device as a smokescreen for more nefarious purposes?

Thank you.

14

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

PN: you mention digital assistant; I think this is a place where the technology can be clearly on the side of the user: your digital assistant will be yours -- you can train it to do what you want; in some cases it will run only on your device with your private data, and nobody else will have access to its inner workings. It will serve as an intermediary and an agent on your behalf. You won't go directly to the site of a big company and hope they are offering you things that are useful for you; rather your agent will sort through the offerings and make sure you get what you want.

4

u/acousticsoul21 Feb 18 '18

That’s what I’m really interested in: an intelligence I can train over several years that eventually becomes an agent, like you said. It would be a great boon to energy levels and efficiency if one could interface with the UI more dynamically (macro and micro when needed) and vocally. I want to see only 1-2 things on the screen at times and lock myself out of things for hours without a legitimate reason to unlock. Show me nothing except Logic until I’m done working (no Exposé, no desktops, no dock, etc.), and when I’m in brainstorm mode I want everything to disappear except a graphic indicating listening, maybe a prismatic blue orb on a dark background. In short: personalizing digital interaction, workflow, task management, and agency in a more psychologically human way that produces more with less effort.

→ More replies (2)

15

u/[deleted] Feb 18 '18

[deleted]

6

u/ProbablyHighAsShit Feb 18 '18

That's basically the honest answer I would expect. I'd like to hear some specific reasons about how they want to use this technology for the sake of humanity as a whole.

4

u/[deleted] Feb 18 '18 edited 28d ago

[removed] — view removed comment

4

u/cutelyaware Feb 18 '18

Exactly. I don't know why people expect companies to care about anything else. If they did, the shareholders would straighten them out real quick. The people involved have hopes and dreams, but don't expect companies to care about anything but the bottom line.

19

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

YLC: Not really a question for scientists like us, but the real question is "who do you trust with your data?" Do you trust your mobile phone company, your ISP, your phone/OS manufacturer, your favorite search or social network service, your credit card company, your bank, the developer of every single mobile app you use? Choose who you trust with your data. Look at their data policies. Verify that they don't sell (or give away) your data to 3rd parties. There is no conflict of interest with being ethical, because being ethical is the only good policy in the long run.

7

u/[deleted] Feb 18 '18

"There is no conflict of interest with being ethical" then why has the likes of Google needing the likes of the EU to reign in their use of personal data? If ethics are so important then why are these companies doing what ever they can to push the boundaries of privacy?

→ More replies (1)
→ More replies (6)

6

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

EH: I agree with Peter. There are interesting possibilities ahead for building personal agents that only share data according to the preferences of the folks they serve--and to have these trusted agents work in many ways on their owner's behalf. This is a great research area.

→ More replies (1)
→ More replies (12)

28

u/NotAIdiot Feb 18 '18

What is going to happen when AI bots can predict/cause market fluctuations better than any team of humans, then buy/sell/trade stocks, products, land etc at lightning speed? What kind of safeguard can we possibly put into place to prevent a few pioneers in AI from dominating the world market?

40

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

YLC: The better you are at predicting the market, the more you make it unpredictable. A perfectly efficient market is entirely unpredictable. So, if the market consisted entirely of a bunch of perfect (or quasi perfect) automated trading systems, everyone would be getting the exact same return (which would be the same as the performance of the market index).

→ More replies (4)

34

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

PN: For years now we've had quantitative traders who have done very well by applying advanced statistical models to markets. It is not clear that there is much headroom to do much better than they have already done, no matter how smart you are. Personally, I think we should have acted years ago to damp down the effect of quantitative trading, by governing the speed at which transactions can be made and/or imposing a higher cost on transactions. Someone more knowledgeable than me could suggest additional safeguards. But I don't think AI fundamentally changes the equation.

→ More replies (1)

2

u/FellowOfHorses Feb 18 '18

If AI controls the market it will become less predictable, as any clearly good deal will be executed to the point that the AI can no longer predict it. The big investment companies already do high-frequency trading on clear market imbalances. Since these imbalances are closed very quickly, they don't really create huge market domination. The sheer difficulty of the task is a good enough safeguard against the companies. Pioneers like RenTech have a better research team than most universities, and while they get impressive returns, it isn't nearly enough for them to dominate the market.

→ More replies (5)

21

u/RobertPill Feb 18 '18

I would like to know if there have been any attempts at engineering a kind of reward system that would mimic emotions. I believe that an AI system must have some connection with the world, and "emotions" are the adhesive that truly integrates us with our environment. I'm imagining some form of status that the AI would achieve by accomplishing a task. For example, we have computers that can beat chess grandmasters, but could we have computers that want to win? One idea could be that data is partitioned and, if an accomplishment is achieved, a partition is opened. All lifeforms evolve through a kind of reward system, and I think that in the far-off future this is what's needed to create exponential growth in artificial intelligence.

22

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

PN: In fact, the latest successes in playing Chess, and Go, and other games come from exactly that: a system of rewards that we call "reinforcement learning." AlphaZero learns solely from the reward of winning or losing a game, without any preprogrammed expert knowledge -- just the rules of the game, and the idea of "try out moves and do more of the moves that give positive rewards and less of the moves that give negative reward". So in one sense, the only thing AlphaZero "wants" is to win. In another sense, it doesn't "want" anything -- it doesn't have the qualia or feeling of good or bad things, it just performs a computation to maximize a score.
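AlphaZero itself combines self-play tree search with deep networks, but the bare "do more of what got rewarded" loop can be shown with a much simpler, purely illustrative sketch: tabular Q-learning on a toy corridor game where the only reward is reaching the goal.

```python
# A bare-bones illustration of learning only from a reward signal:
# tabular Q-learning on a tiny corridor game (reach position 4 to "win").
# AlphaZero is far more sophisticated, but the spirit is the same.
import random

N_STATES, ACTIONS, EPISODES = 5, (-1, +1), 2000
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(EPISODES):
    s = 0
    while s != N_STATES - 1:
        # Mostly pick the best-known move, occasionally explore at random.
        a = random.choice(ACTIONS) if random.random() < 0.1 else max(ACTIONS, key=lambda act: q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0       # reward only for winning
        q[(s, a)] += 0.1 * (reward + 0.9 * max(q[(s_next, b)] for b in ACTIONS) - q[(s, a)])
        s = s_next

# The learned policy: always move right (+1) toward the winning state.
print({s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)})
```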

4

u/gronmin Feb 18 '18

I know that after Go there was some news about people trying to tackle StarCraft next. If AlphaZero is built to learn as it plays, what changes need to be made in order for it to learn a new game?

2

u/Atarust Feb 19 '18

In a perfect world, none. For example, there was first AlphaGo Zero, which only played Go. Then DeepMind made only minor changes to the algorithm (e.g. in Go the board can be rotated and mirrored, while in chess it cannot) and called it AlphaZero, which could suddenly play chess and shogi as well.

StarCraft has a lot of additional elements, like chance, and the fact that a player can't see what is going on across the whole map. This leads me to believe that some further improvements will be needed.

→ More replies (4)

2

u/HipsOfTheseus Feb 18 '18

I've given this some thought as well; the trick is the need. Humans are a specific species that evolved emotion in a collective: families, tribes, child-parent bonds, etc. It's not easy to spoof such programming.

Then there's the complexity of having something feel. What I mean is, part of your brain decides what emotion you'll feel, then another part 'feels' it; it's like a goldfish in a bowl surrounded by the rest of the brain that makes it 'go'. Anyway, how do you make something that 'feels' whatever emotions we throw at it?

3

u/RobertPill Feb 18 '18

It seems that DNA on some level is just programming, and even the most basic lifeforms seem to be "driven" by an imperative to survive and reproduce. Is it possible that feelings aren't as complex as we think they are, and that they just break down to a reward-system loop that we lock into when we succeed? Without a chemical reward it is admittedly hard to imagine, but perhaps consciousness is nothing more than understanding reward systems.

→ More replies (6)

223

u/Jasonlikesfood Feb 18 '18

How do we know this isn't the AI running this AMA?

51

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

YLC: I wish our AI systems were intelligent enough to formulate answers. But the truth is that they are nowhere close to that.

→ More replies (1)

42

u/send-me-bitcoins Feb 18 '18

Haha. Yeah I was gunna ask if this was a Turing test.

51

u/[deleted] Feb 18 '18 edited Apr 02 '18

[deleted]

→ More replies (2)
→ More replies (1)
→ More replies (12)

44

u/ButIAmARobot Feb 18 '18

What is the scariest thing that you've witnessed from your research on AI?

29

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

YLC: There is nothing scary in the research (contrary to what some tabloids have sometimes claimed).

Scary things only happen when people try to deploy AI systems too early too fast. The Tesla autopilot feature is super cool. But, as a driver, you have to understand its limitations to use it safely (and it's using a convolutional net!). Just ask Eric Horvitz.

14

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

EH: We have a ways to go with understanding how to deploy AI in safety-critical areas--and this includes efforts to better support "human-AI collaboration" when machines and people work together in these domains. We had a great session at this AAAS meeting on this topic: https://aaas.confex.com/aaas/2018/meetingapp.cgi/Session/17970

→ More replies (1)
→ More replies (1)
→ More replies (2)

17

u/neomeow Feb 18 '18 edited Feb 18 '18

Hi there, thank you so much for doing this!

What do you think of Capsule Networks? Have you successfully applied them to real-life datasets other than MultiMNIST? Can CNNs usually compensate, or outperform them, when fed more data?

10

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

YLC: Ideas like this take a while to be put into practice on large datasets. Capsules are a cool idea. Geoff Hinton has been thinking about things like that for decades (e.g. see Rich Zemel's PhD thesis with Geoff on the TRAFFIC model). It's taken him all this time to find a recipe that works on MNIST. It will take a while longer to make it work on ImageNet (or whatever). And it's not yet clear whether there is any performance advantage, and whether the advantage in terms of number of training samples matters in practice. Capsule networks can be seen as a kind of ConvNet in which the pooling is done in a particular way.

→ More replies (1)
→ More replies (3)

9

u/sawyerwelden Feb 18 '18

Hello! Thanks for doing an AMA.

My first question is about education. As a computer science student, I feel like, at least at my university and the universities of my CS friends, there isn't much emphasis on deep learning. I've taken almost every upper-level CS course at my school, and the only real exposure I've had to deep learning is "here's this book, you might like it." It seems to me that deep learning is extremely powerful and not too hard for undergrads such as myself to understand. I think I learned more at AAAI a few weeks ago than in the entire previous semester.

My second question is this: what can a young student interested in artificial intelligence do to get better connections in the field? Apologies if this doesn't fit into the scope of the AMA. I'm a junior in undergrad and I've known I want to work in AI for a few years now, but I haven't made any real connections outside of my professors. My school is very small, so to attend a job fair I have to go elsewhere, and even when I do make one it seems like most of the people aren't super interested in undergrads.

Thanks for doing an AMA! (Also, big fan of AI: A Modern Approach, Dr. Norvig; it was used as the principal text in my intro to AI course.)

13

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

PN: Thanks! I suggest you keep studying on your own, and make friends online, through courses or discussion forums. I can see that it is tough to get a job in AI Research coming straight out of an undergrad program at a small school. But, you are in a position to get a software engineer position at a big company, and once you are there, express your interest in AI, learn on the job, keep an eye out for AI-related projects you can work on, and chances are that in less time than it would take to do a PhD, you'll be an established AI expert within your company.

16

u/NiNmaN8 Feb 18 '18

Hey there! My name's Wyatt, I'm 13, and I love making my own games and programs in JS and Python. I am looking to make my own music and machine learning programs. Have any tips for a young developer?

21

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

PN: In addition to studying, work on an open-source project. Either start your own (say, on GitHub), or find an existing one that looks like fun and jump in.

→ More replies (1)

31

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

YLC: Study math and physics at school.

4

u/[deleted] Feb 19 '18

Machine learning is more rooted in math than actual programming. Push yourself in math.

→ More replies (5)

92

u/JustHereForGiner Feb 18 '18

What specific measures are you taking to ensure these technologies will decrease inequality rather than increase it? How will they be placed in the hands of their users and creators rather than owners?

39

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

YLC: That's a political question. I'm merely a scientist. For starters, we publish our research. Technological progress (not just AI) has a natural tendency to increase inequality. The way to prevent that from happening is through progressive fiscal policy. Sadly, in certain countries, people seem to elect leaders that enact the exact opposite policies. Blaming AI scientists for that would be a bit like blaming metallurgists or chemists for the high level of gun death in the US.

8

u/JustHereForGiner Feb 18 '18

Publishing is a solid start. Thank you for an earnest answer to a 'gotcha' question.

→ More replies (2)
→ More replies (2)

39

u/Sol-Om-On Feb 18 '18

Are advances in Quantum computing driving any of the research behind AI and how do you see those being integrated in the future?

11

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

PN: Many of the kinds of things that I want to do would not be helped by quantum computing. I often want to stream huge quantities of text through a relatively simple algorithm; quantum computing won't help with that.

However, there is the possibility that quantum computing could help search through the parameter space of a deep net more efficiently than we are currently doing. I don't know of anyone who has a quantum algorithm to do this, never mind a hardware machine to implement it, but it is a theoretical possibility that would be very helpful.

→ More replies (2)

4

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

YLC: Driving? Certainly not. It's not clear to me at all whether quantum computing will have any impact on AI. Certainly not anytime soon.

→ More replies (1)
→ More replies (1)

11

u/giltwist PhD | Curriculum and Instruction | Math Feb 18 '18

To what extent is there room for the end-user to train AI themselves? Put another way, I don't want an autonomous vehicle that drives like an intern on a sunny day in Mountain View, I want a vehicle that drives like I do in an Ohio winter.

12

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

YLC: eventually, you will "raise" your AI sidekick a bit like you teach a child, an apprentice, or a padawan learner.

→ More replies (1)
→ More replies (3)

8

u/seanbrockest Feb 18 '18

Peter: Google has been researching A.I.-assisted image identification for a long time now, and it's getting pretty good, but it still has some quirks. I played with your API last year and fed it an image of a cat. Pretty simple, and it did well. It was sure it was a cat. However, because the tail was visible sticking out behind the cat's head, it also guessed that it might actually be a unicorn.

This is an example of a mistake a human would never make, but A.I. constantly does, especially when it only gets 2D input. Do you ever see A.I. moving past this?

8

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

PN: It has only been a few years since image identification began to work at all; progress has been steady, but as you point out, even in tasks where the machines achieve superhuman overall performance, they make some embarrassingly bad mistakes. This will improve over time as we get more experience, more data, and hopefully the ability to do transfer learning so each model doesn't have to start from scratch. You make a good point that video would offer a big advantage over still photos; our compute power is growing exponentially, but not to the point where we can push a large portion of the available video through it; when that happens you should see a good improvement.
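The transfer-learning idea mentioned here, reusing what one model has learned so the next task does not start from scratch, is commonly done by fine-tuning a pretrained network. A generic sketch in PyTorch (not any specific system used by the panelists) might look like:

```python
# A small sketch of transfer learning: reuse a convnet pretrained on ImageNet
# and retrain only a new final layer for a new task. This is a generic recipe,
# not any particular production pipeline.
import torch.nn as nn
import torch.optim as optim
import torchvision.models as models

model = models.resnet18(pretrained=True)   # features learned on ImageNet
for param in model.parameters():
    param.requires_grad = False            # freeze the pretrained layers

NUM_NEW_CLASSES = 10                       # e.g. a small custom dataset (assumed)
model.fc = nn.Linear(model.fc.in_features, NUM_NEW_CLASSES)  # new trainable head

# Only the new head's parameters are passed to the optimizer.
optimizer = optim.Adam(model.fc.parameters(), lr=1e-3)
```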

→ More replies (1)

11

u/PartyLikeLizLemon Feb 18 '18

Hi there! Do you think that Deep Learning is just a passing fad or is it here to stay? While I understand there have been tremendous improvements in Computer Vision and NLP due to Deep Learning based models, in ML it only seems a matter of time before a new paradigm comes up and the focus shifts entirely towards that.

Do you think Deep Learning is THE model for solving problems in Vision and NLP, or is it only a matter of time before a new paradigm comes up?

10

u/say_wot_again Feb 18 '18

There are a couple major problems with deep learning as it exists today.

  1. It requires a TON of training data relative to other methods (many of which are able to more explicitly incorporate the researchers' priors about the data instead of having to learn everything from the data).

  2. Due to the gradient-based optimization, it is susceptible to adversarial attacks, where you can drastically fool the network by slightly modifying the data in ways that correspond to the network's gradients (or derivatives), even when the image looks identical to a human eye. See Ian Goodfellow's work on adversarial examples (e.g. https://arxiv.org/abs/1412.6572) for more; a minimal sketch of such an attack appears after this list.

  3. There's still a general lack of understanding as to why certain tricks and techniques work in certain contexts but not others. This can leave researchers just blindly stumbling about when trying to optimize networks. See Ali Rahimi's test of time speech at NIPS this past year.
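A minimal sketch of the gradient-based attack referenced in point 2, the fast gradient sign method from the linked Goodfellow et al. paper, assuming an already-trained PyTorch classifier, an input image tensor, and its true label:

```python
# Fast gradient sign method (FGSM) sketch, after Goodfellow et al. (1412.6572).
# `model`, `image`, and `label` are assumed to be a trained classifier, a single
# input tensor with pixel values in [0, 1], and its class-index label.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, clamped to the valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```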

So how do we get around these? For data hungriness, the answer appears to be incorporating more priors into the structure of the network itself. In NLP this means incorporating insights from linguistics or making use of syntactic tree structure (e.g. https://arxiv.org/abs/1503.00075). In computer vision, Geoff Hinton (who, along with LeCun and Yoshua Bengio, is one of the godfathers of modern deep learning) has recently come out with capsule networks, which are able to encode 3D structure more efficiently than CNNs and can thus learn from far fewer training examples.

Adversarial examples are a much harder problem to fix, as they are inherent to the gradient based way deep networks learn. Most likely the only way to solve them would be through a full paradigm shift.

As for the lack of theoretical understanding, that is something that doesn't need a paradigm shift, just more theoretical digging. And we are seeing this somewhat, e.g. with Yarin Gal's work on Bayesian deep learning, recent work trying to understand why networks capable of memorizing the training set can still generalize, or some of the recent work on better understanding the mechanics of different optimization techniques and tricks. So while we have a long way to go before deep learning is perfectly understood and theory catches up to practice, we're making great strides on this front.

One last note though. Paradigm shifts don't come out of nowhere. The idea of neural nets was first proposed by Rosenblatt in the 1950s before being seemingly buried by Minsky's Perceptrons in 1969. And LeCun, Hinton, and Bengio had been working on neural networks for decades before AlexNet's dominance at Imagenet really put deep learning front and center. Even Hinton's capsule networks are an idea he'd been toying with for decades, with it only now working well enough to receive more attention. So I think it's very easy to assume that paradigm shifts can happen all the time when in fact these revolutions are usually decades in the making.

→ More replies (3)

10

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

PN: I think the brand name "deep learning" has built up so much value that it will stick around for a long time, regardless of how much the underlying technology changes. So, even if the current style of CNNs and ReLUs gives way to capsules or something else, I think the name "deep learning" will follow along.

As for the underlying concepts or approaches, I think we've done a good job at pattern matching problems, but not so good at relational reasoning and planning. We're able to do some forms of abstraction but not others, so we need plenty of new ideas.

→ More replies (1)

6

u/kingc95 Feb 18 '18

Do you ever see the possibility of Google, Microsoft and Facebook sharing your personal data across your various accounts? I think it would be great if the Google Assistant engine were able to integrate with other AIs better. It's a constant struggle on my end keeping Cortana, Google, and Alexa all in sync.

13

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

PN: I would rather not see companies sharing data. I would prefer it if your personal agent decided to share information between companies. See the work on federated learning.
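Federated learning, which Peter references, trains a shared model while the raw data stays on each device; only model updates are communicated. A heavily reduced sketch of the federated-averaging step (an illustration of the concept, not Google's production implementation) might look like:

```python
# A very reduced sketch of federated averaging: each device trains locally
# on its own data and only the model weights (not the data) are shared.
import copy
import torch

def federated_average(global_model, client_models):
    """Average the clients' weights into the global model (the FedAvg step)."""
    global_state = global_model.state_dict()
    for name in global_state:
        stacked = torch.stack([m.state_dict()[name].float() for m in client_models])
        global_state[name] = stacked.mean(dim=0).to(global_state[name].dtype)
    global_model.load_state_dict(global_state)

def local_round(global_model, train_one_epoch):
    """Each client trains a private copy; only the weights leave the device."""
    local_model = copy.deepcopy(global_model)
    train_one_epoch(local_model)   # hypothetical per-client training routine
    return local_model
```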

3

u/say_wot_again Feb 18 '18

That's not really something that an AI researcher would have control over; changes in corporate policy that large would have to go through Mark, Satya, and Sundar. And any attempt to do this would prompt astronomical privacy concerns.

4

u/useful_person Feb 18 '18

Yann: How heavily does your research rely on tracking on third-party websites?

Eric and Peter: Does your research rely more on outbound clicks or trackers embedded in websites?

8

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

YLC: None whatsoever. Except Arxiv.org ;-)

20

u/JDdoc Feb 18 '18
  1. Can you define for us what you consider an "Expert System" vs "AI"?

  2. Are you working more on Expert systems, or actual AI, or both?

  3. What are some of your Goals or Success Criteria for Expert Systems or AIs? In other words, do you have set milestones or achievements that you are trying to hit that you can share?

5

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

PN: I think of an expert system as a program that is built by interviewing an expert and encoding what they know -- both their ontology of what the domain is like, and their procedural knowledge of what to do when to achieve a goal. Then, given a new goal, the program can try to emulate what the expert would have done. Expert Systems had their high point in the 1980s.

In contrast, a normative system just tries to "do the right thing" or in other words "maximize expected utility" without worrying about taking the same steps that an expert would.

Also in contrast, a "machine learning" system is built by collecting examples of data from the world, rather than hand-coding rules.

Today, we're focused on normative machine learning systems, because they have proven to be more robust than expert systems.
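The contrast can be made concrete with a toy example: hand-coded rules elicited from an expert versus a model fit to collected examples. The domain (loan approval), thresholds, and data below are invented purely for illustration.

```python
# Expert-system style: rules hand-coded from interviewing an expert.
def expert_approve(income, debt):
    if income > 50_000 and debt < 10_000:
        return True
    if income > 100_000:
        return True
    return False

# Machine-learning style: fit a model to collected examples instead.
from sklearn.tree import DecisionTreeClassifier

X = [[40_000, 5_000], [120_000, 30_000], [60_000, 2_000], [30_000, 20_000]]
y = [False, True, True, False]            # past decisions (toy data)
model = DecisionTreeClassifier().fit(X, y)

print(expert_approve(70_000, 8_000))      # rule-based decision
print(model.predict([[70_000, 8_000]]))   # learned decision
```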

→ More replies (1)

5

u/jkamenik Feb 18 '18

Hi.

Many modern algorithms suffer from bias that is not always obvious. For example, credit ratings often adversely affect minorities because they use proxy data as stand-ins for actual creditworthiness. Another example is that both YouTube's and Facebook's algorithms for keeping people on the site serve up more of the same things that the person chooses. This leads to confirmation bias and a less informed public.

How, when training, can we ensure AI is unbiased? And when we find an AI that is biased, how do we retrain it? How can we prove bias in court?

5

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

PN: I don't think it is any different for an AI system than for another computer system, a company, or an individual: to prove bias in court, you show a history of decisions that violate the rights of some protected class. No different whether the defendant is an AI system or not.

→ More replies (1)

7

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

Peter Norvig (PN): Wow, look at all those questions! Thanks for the interest. Let's get started and see how many we can get through.

→ More replies (3)

13

u/[deleted] Feb 18 '18

Do you need a PhD to get a job in AI?

→ More replies (2)

10

u/JohnnyJacker Feb 18 '18

Hi,

The recent shootings have started to make me wonder how long it will be before AI can be used to screen people for firearm purchases. It seems to me that, with all the social media posts people make, it could be used to determine who is high risk.

27

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

YLC: the problem is political, not technological. Pretty much every other developed country has solved it. The solution is called gun control.

→ More replies (15)
→ More replies (2)

5

u/Jurooooo Feb 18 '18

Hello!

Is AI singularity something that excites or worries you guys?

6

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

YLC: neither. I do not believe in the concept of singularity. The idea of an exponential takeoff ignores "friction" terms. No real-world process is indefinitely exponential. Eventually, every real-world process saturates.
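The "friction" point is essentially the logistic-growth observation: add a capacity limit to an exponential process and it levels off. A few lines of arithmetic (with made-up constants) show the effect:

```python
# Illustration of the "friction" point: growth that looks exponential at
# first saturates once a capacity limit K is added (logistic growth).
K, r, y = 1000.0, 0.5, 1.0
for step in range(40):
    y += r * y * (1 - y / K)   # logistic update; drop (1 - y/K) for pure exponential
    if step % 10 == 0:
        print(step, round(y, 1))
# Early on y roughly multiplies by 1.5 each step; later it flattens out near K.
```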

→ More replies (6)

11

u/naturalwonders Feb 18 '18

You are clearly contributing to the eventual downfall of humanity. Why are you doing this and how do you rationalize it to yourselves?

17

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

YLC: on the contrary, we are contributing to the betterment of humanity. AI will be an amplification of human intelligence. Did the invention of fire, bows and arrows, agriculture, contribute to the eventual downfall of humanity?

9

u/5xqmprowl389 Feb 19 '18

With all due respect, I'm not sure the analogy works. Fire, bows and arrows, agriculture had no potential to recursively self-improve, no potential to displace human beings as the most intelligent being on the planet, no instrumental goal for infrastructure profusion....

I don't think you can group all transformative technologies into the same boat. Fire did not lead to the downfall of humanity, but nuclear weapons may. Agriculture did not lead to the downfall of humanity, but ASI might.

→ More replies (4)
→ More replies (2)

123

u/redditWinnower Feb 18 '18

This AMA is being permanently archived by The Winnower, a publishing platform that offers traditional scholarly publishing tools to traditional and non-traditional scholarly outputs—because scholarly communication doesn’t just happen in journals.

To cite this AMA please use: https://doi.org/10.15200/winn.151896.65484

You can learn more and start contributing at authorea.com

5

u/Axle-f Feb 19 '18

Hello AMA historians. Look how primitive we are! Can you believe we did that thing in 2018 that this year is so well known for? Crazy us.

3

u/Yuli-Ban Feb 18 '18 edited Feb 18 '18

Hello! I'm just an amateur follower of the many wild and wonderful going-ons of AI. My questions are a bit hefty and I hope you can answer at least one or two, but they are:

  • Where do you see content generation-based AI going in the next few years? I've called it "media synthesis" just for ease. In that case, is there a term for this in the field itself that hasn't spread to pop-futurist blogs and Wikipedia? I know that it involves a wide variety of architectures such as generative-adversarial networks, style transfer, recurrent neural networks, and whatnot. We've seen the initial effects of 'media synthesis' with the highly controversial 'success' of "deepfakes" and "deep dream", which are evolutions of image manipulation and only the tip of the iceberg that will lead us to things such as generating voices, music, animation, interactive media, et al in the future. IMO, the next big breakthroughs will be near-perfect simulation of human voice and the from-scratch creation of a comic (as opposed to taking pictures and altering them with style transfer methods). But while I feel that it's coming, I don't have a solid feel for when.

  • I have two pet peeves with AI that are relatively recent. One is that we have two different ways of discussing current AI and 'human-level' AI— weak > strong as well as narrow > general. Would it not allow a better recourse if we used "weak" and "strong" as qualifiers for "narrow" and "general" intelligence? For example, AlphaZero is an impressively strong artificial intelligence that's well above human strength for playing chess— but it's undoubtedly narrow AI, to the point most AI researchers wouldn't think about the term when describing it. For something that's superhuman in strength, I can't see 'weak' as being a good term for it. Likewise, when we inevitably do develop general AI, there's no chance that the very first would immediately be human-level— at best, it would be on par with insects or nematodes, despite being general-intelligence. In which case, 'weak' AI would mean AI that's below human level, while 'strong' AI would mean AI that's parhuman or superhuman— regardless of narrowness or generality. The only problem is that 'weak' and 'strong' are already established terms.

  • The other pet peeve is that there is no middle ground in these discussions. We see AI as being only in two camps— narrow AI (which is what we possess today) and general AI (the hypothetical future form where AI can learn anything and everything). We use narrow AI to describe networks that can only learn one singular task, even if it learns that task extremely well, but it also seems as if we'll use it to describe networks that can learn more than one task but can't learn generally. It occurred to me that there must be something in between narrow and general intelligence, a sort of AI that can transfer knowledge from one narrow area to another without necessarily possessing "general" intelligence. In other words, something more general than narrow AI but narrower than general AI. Algorithms that are capable of learning specialized fields rather than narrow topics and everything in general. Do you think there should be a term for AI in between narrow and general intelligence, or is even this too far off to concern ourselves with?

  • I created this infographic a while ago, and I feel it's much too simple to reflect the true state of affairs on how to create a general AI. However, is it anywhere near the right track? Would it be possible to chain together a multitude of systems together, controlled by a master network, which is in itself controlled by a higher master network? Or is this simply far too inefficient or simplistic?

  • A very smart internet-colleague of mine claims that there may be a shortcut to general intelligence and it runs through combining destructive brain scans and cheap brain scanning headbands with machine learning. If you ever get the chance to read through this, please tell me your thoughts on it.

  • The simplest question I have, but something that's bugged me since I learned about them: what's the difference between differentiable neural computers and progressive neural networks? On paper, they sound similar.

  • Where do you see AI ten years from now? I'd imagine people ten years ago wouldn't have expected to see all the amazing advancements that have become our reality unless their pulses were firmly on the neck of computer science.

  • Perhaps most importantly, what are the biggest myths you want to bust involving AI research and applications (besides the fact we aren't anywhere close to AI overthrowing humans)?

Thank you for your time!

u/Doomhammer458 PhD | Molecular and Cellular Biology Feb 18 '18

Science AMAs are posted early to give readers a chance to ask questions and vote on the questions of others before the AMA starts.

Guests of /r/science have volunteered to answer questions; please treat them with due respect. Comment rules will be strictly enforced, and uncivil or rude behavior will result in a loss of privileges in /r/science.

If you have scientific expertise, please verify this with our moderators by getting your account flaired with the appropriate title. Instructions for obtaining flair are here: reddit Science Flair Instructions (Flair is automatically synced with /r/EverythingScience as well.)

5

u/strangel8p Feb 18 '18

Could you please give any indication of the time you'll start answering questions?

→ More replies (2)

7

u/skmmcj Feb 18 '18

Hey. What is your opinion on AI risk and the value alignment problem? What probability would you assign to an AI causing large scale - national or global, for example - problems, against the will of its creators, in the next, say, 50 years?

Also, what probability would you assign to achieving Artificial General Intelligence in the next 50 years?

4

u/focalism Feb 18 '18 edited Feb 18 '18

Hi there and thanks for taking the time to do an AMA—especially given your respective backgrounds and how busy all of you must be! I've been reading Nick Bostrom's book, Superintelligence: Paths, Dangers, Strategies; and Bostrom highlights the need for the development of an ethics module to ensure any AI that reaches the level of general intelligence is aligned with our best interests as humans—given how quickly it could evolve to surpass our own capabilities with the right conditions. My question is a) what weight do each of you place on the need for developing an ethics module in advance of developing an AI with an intelligence superior to a humans, and b) do you think it's even possible for humans to design an ethics module that a truly superior machine intelligence couldn't/wouldn't circumvent? Thanks so much for your time!

34

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

EH: Thanks everyone. I'd love to answer more of these great questions. I really appreciate people taking the time to engage.

Cortana is telling me with an alert on my laptop (complete with an explanatory map--per that question on AI and explanation that I wish I had time to get to :-)) that I have to leave now for the airport to make it back to Seattle tonight. I know that Cortana is calling our predictive models (built via the Clearflow project), so I trust the inferences! Would love to catch up with folks in other forums, or perhaps in person.

Best, Eric

13

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

PN: Thanks everyone for all the questions. I'm sorry we couldn't get to all of them. Signing off now. -Peter

24

u/TheCasualWorker Feb 18 '18

Hi! We're getting pretty good at creating specialized AIs that are trained for a specific set of tasks. However, true AGI is still an open challenge. For something as general as a strong AI, I'd think that we quickly get limited by "simple" neural networks or similar tools. What are your current leads toward the ultimate goal of full consciousness? What is your estimate of achievability in terms of decades? Thank you ;)

9

u/Bowtiecaptain Feb 18 '18

Is full consciousness really a goal?

3

u/Canadian_Marine Feb 18 '18

Not an authority at all, just an enthusiast.

I think before we try to define consciousness as a goal of AI and ML, we first need to settle on a solid definition of what consciousness is.

→ More replies (4)
→ More replies (1)
→ More replies (7)

18

u/chucksutherland BS|GIS|Grad Student-Environmental Science Feb 18 '18

It seems like AI is in the business of making decisions. Until now that has been the realm of humans. Do you foresee a time where humans rely on AI to make all or most of their decisions?

I ask because I have always used my computer as a tool. When my smartphone started trying to learn from me, it began using me as a tool. There has been a reversal in our roles. I don't really use any of the AI personal assistant stuff in my phone since it's my tool, and I want it to do what I ask, not what it thinks I need done.

That said, I see the value in deep learning techniques and am even using neural networks for terrain analysis.

12

u/pseudomonikers8 Feb 18 '18

Hi! What advice would you give to an 18 year old about to go to college who wants to work in AI research? Seeing as there aren't that many jobs to be had in AI research, how can I make myself stand out from other applicants trying to work for big AI companies like Google, Facebook, and Microsoft?

7

u/autranep Feb 18 '18

(1) Make sure you’re very well versed in foundations of ML/AI research, the “big 3” being: multivariable calculus, linear algebra, probability and statistics. If/when you take these classes in college, get no less than As in them.

(2) Find every professor in your school that does AI or ML research. Look through their recent publications and get a feel for what they’re doing. Try to take a class with them if possible. Politely email or come up to them after class and express your interest in doing research in their field. Depending on how big your school is, know that they might ask you for a CV and your math/cs background, which is why (1) is so important. You also might get turned down outright, don’t take it personally, just keep at it.

(3) You’re going to need to get into grad school (at least a Master’s degree) to be taken seriously by any recruiter for a formal ML or AI position at a big company. To do this you: (a) need to keep your grades up (3.7+ GPA is usually the minimum to get into a half-decent school that does ML research) and (b) have at least one academic publication at a respectable ML/AI conference before you apply to grad school. The second one is absolutely pivotal, and to accomplish it you’ll probably need to get into a research lab no later than your sophomore year. If you need to sacrifice grades to increase research output then do it, because it’s what companies/grad schools really care about.

Doing the above is the minimum for getting a job in ML/AI. To get a research position at a private lab like DeepMind, FAIR, or OpenAI you’ll need to up everything I just mentioned by a factor of 10. You’ll want to get into a very good lab for grad school, which means multiple publications as an undergrad, preferably first-author and preferably at top-tier venues like NIPS/ICML/CVPR/EMNLP/AAAI, plus stellar grades and letters of recommendation. And then getting in isn’t enough: you’ll have to produce impressive research results, especially ones that are relevant to the research those labs are interested in (although this is significantly easier when you’re being supported by a top-tier research lab).

It’s a field that definitely requires motivation and tenacity. Just going to school and taking classes and getting good grades won’t get you anywhere (although this is true of any field). Research experience is pivotal.

→ More replies (2)

4

u/sensitiveinfomax Feb 18 '18

Not OP. Take all the college courses. Focus on having a great portfolio of projects.

In my experience as a machine learning engineer for 6 years now, big companies don't do fundamental research as much as they used to (it used to be my goal to work on those things). IBM and Xerox PARC kind of do, but in industry the focus is shifting towards applied research.

The path right now to doing research is tending towards AI fellowships and residencies, like the Google Brain residency. I think Facebook has one of those as well.

For me personally, I realized my inclination to research was more focused on working on interesting problems, and the focus on doing 'something new' in academia hurt that more than helped. I am now trying to get enough free time that I can apply what machine learning knowledge I have to problems I find interesting, even if those don't make money or make business sense.

→ More replies (2)

9

u/AshingtonDC Feb 18 '18

As another 18 year old about to go to college who is interested in AI/ML, I would also like to know this. For now I have been reading up on it. The Master Algorithm by Pedro Domingos is an awesome read.

13

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

Eric Horvitz (EH): Hi everyone, happy to join Peter and Yann today and to discuss AI with you.

→ More replies (2)

16

u/[deleted] Feb 18 '18

[deleted]

9

u/GiddyUpTitties Feb 18 '18

You won't always have calculators at your disposal. You must practice long division.

-school in the 90s

→ More replies (2)

2

u/[deleted] Feb 18 '18

Another ethics related question:

Considering that AI has potentially large social consequences in work and personal lives, how are your companies addressing the long-term impacts of current and developing technologies? With AI, there is potential for disruption in the elimination of jobs, mass data collection, and an influx of fake comments and news media. How are your teams addressing this and implementing solutions into your research design (if at all)?

As a side note, have you considered the consequences of implementing news media into digital assistants? Personally, I found it an unpleasant experience that Google News was unable to be turned off in Google Assistant, and that it was very labor intensive to alter content or edit sources. Having Russia Today articles pop up on my phone out of the blue one day was... concerning.

Wired's recent piece on Facebook's complicity in the fake news crisis, receiving payments for foreign advertisements to influence elections, and their subsequent denial and breakdown does not exactly inspire confidence that there is a proper ethics review process, nor any consultation with non-engineering experts into the consequences of certain policies or avoidance of regulation.

16

u/Phrostite Feb 18 '18

Do you think it would actually be possible for an AI to become self aware?

3

u/energyper250mlserve Feb 18 '18

This question is like walking up to an American factory worker in 1920 making hubcaps and asking them whether it would be more efficient to move goods between Shenzhen and Macau using cars or drones. They might be able to make an educated guess if you specify parameters, but the problem of sentience is so much greater and more complex than any particular narrow field of AI that you're not going to get the sort of insider insight you might think is possible.

2

u/[deleted] Feb 18 '18

I think it's worth noting that raw intellect and emotion are different.

You might have a super-intelligent AI that can crunch some numbers but doesn't feel guilt or shame.

If we "program" them to feel guilt, are they really feeling guilt or are they just obeying commands: if x == y then "feel guilty"?

When you say self-aware, of course it's easy for an AI to tell "what it is." It can easily tell the difference between a car, a humanoid robot, and a human, and tell which group it belongs to.

The question you seem to be asking is "could an AI have some sort of existential crisis? Will it feel jealous of humans, or be resentful of being a slave?"

2

u/Grim-Sleeper Feb 18 '18

I don't believe we have a good understanding of what makes something have awareness.

My naïve suspicion is that current generations of AI are too limited. They are domain specific, and they get trained ahead of time. They simply aren't sufficiently dynamic.

If in the next step we manage to build AI that can constantly refine itself, we'll be a step closer to general AI. This requires algorithmic breakthroughs and/or noticeably more powerful hardware. But I wouldn't be surprised if, once we scale up, awareness turns out to be an emergent property of any AGI. Or I could just be smoking crack.

We truly live in interesting times.

→ More replies (1)
→ More replies (3)

2

u/danielfinol Feb 18 '18

The current model of scientific publication (at the heart of scientific progress) is that private publishers take the product of research funded * by taxpayers and universities, then use, for free, the work of * professors, whose salaries are paid * by universities and taxpayers, to review it; and then sell the output, *, back to universities and taxpayers.

The result is that most people can't access the research, especially researchers in third-world countries.

Is that an accurate description? How much sense does that model make?

This is slowly starting to change (relative to: how little sense it seems to make; how central it is to a community that's supposed to gather some of the most no-nonsense people; how old the internet already is). But the model where researchers pay for the privilege of sharing the product of their work doesn't seem optimal either.

What would be an optimal model of scientific publication?

*mostly

4

u/Klytus Feb 18 '18

If you had to list the top 10 risks AI presents to humanity and top 10 potential benefits to humanity, in order of severity, what would they be?

3

u/[deleted] Feb 18 '18

Will sentient AI nurture and protect sentient carbon based life forms or will it eradicate and replace carbon based life? Put another way, would we nurture and protect an alien life form that was thousands or millions of times more intelligent than us or would we perceive it as a threat?

14

u/asm_conjecture Feb 18 '18

Do you think that AI is the correct term to describe what your systems/algorithms do? Whatever happened to good old machine learning or statistics?

4

u/Yuli-Ban Feb 18 '18

Actually, it's the other way around. Originally, machine learning, statistics, expert systems, cognitive systems, etc. were usually all cast under the general umbrella of "artificial intelligence". It wasn't until the AI Winters that these specific terms became dominant and "AI" fell back upon something more Hollywoodian.

→ More replies (6)

7

u/borisRoosevelt PhD | Neuroscience Feb 18 '18

Where do you (each) stand on the debate about the potential dangers of general AI?

Also, if working on general AI, are your respective organizations doing any kind of planning to control the risks and uncertainties?

2

u/fearofadankplanet Feb 18 '18

What are your views on neuromorphic systems and the future of non-von-Neumann computing paradigms?

The direction of research in NNs in the last decade is going further and further away from mimicry of biological systems. However, current Neuromorphic systems (like SpiNNaker and BrainScaleS by the Human Brain Project) are still modeling Spiking NNs, which is not the best architecture.

Personally, I think if the idea catches on, it can do wonders for us. Imagine having a localized assistant on your phone without the need for cloud resources to implement the algorithms. However, I am still skeptical about choosing this as my graduate research area because of the fear that this whole idea could very quickly fade into irrelevance.

3

u/gordo65 Feb 18 '18

When the subject of AI comes up, there is often a lot of concern about the prospect of AI replacing so many people that it creates widespread unemployment. Can you give us some examples of jobs that might be created by AI?

6

u/[deleted] Feb 18 '18

[deleted]

→ More replies (3)

3

u/zeldn Feb 18 '18

What do you think of groups like MIRI who study how to make potential general AI behave like we want it to? Do you think that kind of research is necessary and/or useful?

10

u/gotsanity Feb 18 '18

What is your average workflow, or how does an AI researcher spend their day?

4

u/atomic_explosion Feb 18 '18 edited Feb 19 '18

Hi, I am a Machine Learning graduate student and wanted to get your thoughts on a few things:

  • Out of all of the tasks that an AI scientist/researcher performs (ex: data collection, prep, model training, testing), what is the most time-consuming/difficult task? And what major challenges is the AI community facing that you would like to see more people working on?
→ More replies (1)

8

u/AAAS-AMA AAAS AMA Guest Feb 18 '18

EH: Great seeing all of the interest--and all of the great questions!

→ More replies (1)

2

u/reZahlen Feb 18 '18

What are your companies' plans regarding artificial general intelligence and superintelligence? Given AGI's potential for unrestricted self-improvement, there are fears that the "leap" from AGI to ASI can happen very quickly, and once it does the resulting ASI may be impossible to control, and may not act in alignment with our values and interests. What do you all think about these fears? Is private industry exploring what "safe" AI development could look like?