Artificial Intelligence 4: Lord Rabbi Jonathan Sacks on Morality, 6th September 2018

00:03
The 20th century brought some of the toughest moral challenges humanity has ever faced. In this series I've been looking at some of the emerging moral issues facing us in the 21st century, and one of the most significant is the ethics of algorithms and artificial intelligence. AI is already fundamentally transforming our world, and in the coming years will have an enormous impact on almost every aspect of our lives, so the ethical questions surrounding its
00:35
development are urgent and important. As historian Yuval Harari asks in the closing sentence of his best-selling book Homo Deus: what will happen to society, politics and daily life when non-conscious but highly intelligent algorithms know us better than we know ourselves? To deepen my understanding of the ethics of AI, I went to talk to Mustafa Suleyman, co-founder and head of
01:06
applied AI at DeepMind, one of the world's leading innovators in artificial intelligence. I began by asking him what exactly were the ethical dimensions he'd foreseen from the beginning. It was clear to me that in machine learning and AI we were designing tools, systems that were explicitly intended to replicate human capabilities, and I think that brings with it great promise and huge potential to do good: we can do things more efficiently, more accurately and, I hope, in
01:38
the future more fairly. But at the same time, who owns and controls these systems? Whose values do they represent? Who's been excluded when we've designed these systems? And so it was clear to me from the beginning that there were lots of very, very concerning moral questions. So talk us through some of the potential benefits you see emerging over decades from the applied use of AI. Well, one of the enormous benefits that we're already starting to see is that we've been able
02:08
to train a whole range of algorithms to better diagnose different diseases from a series of eye scans. So we can now look at a 3D scan of your eye and diagnose upwards of 40 or 50 different pathologies, including some of the most dangerous blinding diseases like diabetic retinopathy and age-related macular degeneration. And this is obviously really valuable, because if we can diagnose those diseases more cheaply we can intervene more quickly and earlier in the
02:39
degeneration of those diseases, and with very cheap treatments we can go a long way to saving people's sight. But at what point do we begin to hit ethical problems, when treatments for instance are based on an algorithm that we don't understand? I mean, how would you feel about being diagnosed and operated on, perhaps entirely by algorithm, without there being a human being anywhere in that process? I think that's a great question. To break it down, there are two
03:11
different components there: the first is diagnose, the second is operate on. In diagnosis we're attempting to make sense of the data, the raw facts before us, and that information should be presented to the human, to the doctor, to make the final decision and then take the intervention. It's not clear to me yet that we're going to have algorithms in the next few decades that are capable of making that intervention independently and autonomously, and I would argue, to the extent that we do have those technologies, we should be
03:41
extremely cautious about taking the human out of the loop. I think a human should always remain in control, with full oversight and ultimate responsibility for these systems. So am I hearing from you that there is an inescapable ethical dimension here, and that has to involve human beings in the loop? What concerns me is that presumably you can write an ethics code into any artificial intelligence operation, but the question is: how can you write
04:12
empathy into machines that can't feel? You certainly can write values into systems, and we do so all the time. In fact, that's one of the reasons why we need to ensure that the technology companies that are developing these systems actually represent, in terms of gender and race but also importantly in terms of class, the broader society and stakeholders that we seek to serve with our products and services. The biggest danger is that we use these tools to entrench our
04:43
existing biases and compound the injustice that we already see in the world around us. These systems are trained on past human data, and that data represents the same kind of injustices that we would like to fix today. So what they're likely to do, often, is to reproduce the biases and the blind spots that we as humans have already had in running our existing systems, and that's been recorded in the data and subsequently used to train the algorithms. Yuval Harari makes a very
05:15
interesting point when he says that in the past bias was directed against groups, and so groups could get together to fight bias; but with artificial intelligence, bias can be applied to individuals. Knowing everything we've done in the past, it will make predictions of what we'll do in the future, or for instance illnesses that we're likely to get, which might make us uninsurable; and individuals as such
05:47
are not able to fight bias in the same way. Does that concern you? Yes, I think one of the biggest moral challenges of our day is that over the next few decades we're going to have enormous visibility as to the day-to-day injustices and unequal treatment that we experience when we enter the healthcare system, when we go through education, even when we drive around the city in our cars. And that's because we're starting to collect, and we have collected, vast
06:17
amounts of data that describe in enormous and very rich detail our everyday experience, not just with digital systems on the big platforms but also when we go into hospitals and schools and we experience welfare services. Now at the moment we don't quite have the tools that allow us to analyze all of that information, but in the coming years we will, and what it will show, I'm sure, is that there is an enormous variation of experience in our welfare systems, in our penal systems, in our schools and in our
06:49
hospitals. And addressing the injustice of that unequal distribution of resources, if you like, across the country, targeted at different groups based on their gender, based on their race, and ultimately targeted at individuals, is going to be a really difficult ethical challenge. How do we intervene to rebalance the distribution of those resources, given that we can see with our own eyes from the data how unfair and unequal the treatment is likely to be across the country? Mustafa is very clear that as well as
07:21
the immense benefits of artificial intelligence, there are dangers we have to guard against. I wanted to know how the next generation feels about these risks and challenges, because they're going to be deeply affected by them in the future, so I spoke to some sixth-form students from Queens' School in Bushey, Hertfordshire, to get their views. Did some of the things that Mustafa was talking about excite you? Is this opening up possibilities in your mind? Damien: I definitely think it has the
07:52
opportunity to increase the public good to an extent that's never been seen before, in that, as people say with the iris scans, the fact that we could spot or even cure diseases, how could that be seen in any way as a bad thing? Tony: I really agree with Damien. I think when people talk about programming empathy into AI, I actually think one of the advantages of AI is the lack of empathy. You know, I don't think it can be programmed into it, and I think the jobs that human emotions get in the way of are exactly what we want computers to
08:24
do. Personally, I think you have to draw the line and say, well, where is it useful and where is it actually harmful? So in something like the NHS or public goods, it's good for it to be able to be almost neutral and just look at data and process it; in things where, you know, you're informing someone if they've got something wrong with them, I don't think that should be automated, because then I think you do need empathy. It's clear that we could in the future delegate all sorts of stuff away to artificially intelligent machines.
08:55
I mean, parents could program their artificial intelligence, their voice-activated device, to sing songs or tell bedtime stories to kids. There's no doubt that some kind of technology like that will be used for elderly people living alone. We've had the whole issue of diagnosis, which we discussed with Mustafa, where computers might just be better at diagnosis than doctors are. Where do you draw the line and say, this is a kind of
09:26
responsibility we cannot give away? Damien: If we stay with parenting, I think it's equally possible to, say, abdicate the responsibility of cooking or cleaning; but I don't think the raising of children, the socialization of children, is a kind of responsibility we can abdicate, because it's so fundamental to society. Whereas with doctors, the diagnosis perhaps may be given over to artificially intelligent robots, but not the relaying of that information or the care given to patients. Ricky: Well, the thing is that we can't
09:59
draw the line. I think that the option should always be there, for example, for a parent to use AI to raise their children if they wish to do that; as long as that option's there, then it's good. We should then trust the humans to do what they want with that AI. Sabina: I think it's kind of sad that we're using AI as a source of comfort now. It parallels with being brought up to have a business mindset: everyone's so focused on making money, and I don't blame them, because they're trying to put food on the table for their kids, but I think it's also sad that what comes with technology is now
10:30
we're using this as a means to take away from our moral responsibilities, for example Alexa and Siri reading bedtime stories. I think that's a very important thing for parents to do; I think that's where we draw our human connection from. Tom: I really agree with Sabina, that all the stuff with technology nowadays is all about efficiency, efficiency, and I don't understand how you can efficiently raise a kid. I think that's part of the very human experience, to raise someone, and it's sort of, what is
11:00
it all for, if we're just going to break down our own social interactions and just do everything with robotic precision? Ricky: But the question isn't how we're going to stop it from influencing the way that kids are parented and stuff like that; it's seeing how we can use it to supplement the way that parents go about parenting, rather than how we can stop it. Damien: And if you were to, say, have artificial intelligence that could replace a parent or a worker, what is the point in even
11:32
having children or going to work, or what would be the point in anything? It seems as if these basic human interactions are fundamental to what it is to be human. Ricky: I'm sure people centuries ago thought that it is to be human to hunt, for example, that we need to keep hunting and stuff like that. I feel like right now we think it is to be human to not give these responsibilities to AI, but the thing is, AI is coming; in the future, what it is to be human is going to be different to now. It's just, how can we accommodate that instead of stopping
12:03
it? I think that sort of lazy acceptance that AI is coming is exactly the problem. I think there needs to be a feeling of agency amongst people to actually push back against the bad factors. Like with parenting, I think it's already gone too far. I sound like an old person already, going on about how it wasn't like this in my day, but you know, you see parents whose kids are being annoying, so they just give them an iPad, and it's like a pacifier for anyone under 12. Ricky, do you want to get back on that? Yes, I said AI is coming, but that doesn't necessarily mean I mean it in sort of
12:34
a "AI is coming like it's an apocalypse" sort of thing. I just mean AI is going to be an integral part of our society, so I don't see any way in which we can stop it, but more a way in which we can make it part of our humanity. If the increasing reliance on AI is inevitable, how should it be regulated? One of the problems is that the development of AI is fast and legislation and regulation are slow, so every law passed may no longer be
13:04
relevant to the latest technological advance. How then should we be seeking to control this extraordinary technology? Here's Mustafa Suleyman of DeepMind again. I think part of the challenge with respect to legislation over the last few decades has been very much: move as fast as possible, toss things over the fence, see how consumers use the products that have been developed, and then iterate and improve on those products after you've
13:34
observed their impact on society. And increasingly, I think the scale of impact of the tools that we're developing is such that we can no longer rely on just tossing them over the fence. Technology companies have to be far more thoughtful, far more patient, far more sensitive to the potential impact that these sorts of technologies will have on broader society, rather than just deploying them in a somewhat more blind way. But that's going to involve all sorts of companies in a major decision,
14:06
which is: how do you balance profits on the one hand and ethics on the other? Absolutely. I think now more than ever we need to create a semi-permeable membrane around the development of these sorts of technologies. They're so powerful, they're so significant, they have such fundamental moral consequences, that we have to involve religious leaders, community leaders, civil rights groups, NGOs, activists. We need these systems to be accountable to our
14:38
collective values, and hopefully they can begin to spark renewed interest and debate in the ethical and moral consequences of technologies of this kind of scale. Noel Sharkey once said that this whole technological revolution is only going to work if it maintains public trust. Don't you feel that public suspicion has grown so much that these companies are losing public trust? I think there's no doubt about it: Silicon Valley is in a crisis of trust,
15:10
in fact, and people are unsure what data has been collected about them, how it's used, what access they have to that data. The controversies of the last few months have obviously proven that, and that's why I think now more than ever we have to be involved in understanding the details: both the technical details around where that data sits and what rights I have to it, but also the legal details, the terms and conditions of using these sorts of platforms. And these might sound like dry technical and legal questions, but that's where the
15:41
morality really sits. If we want to be responsible citizens and we care about the morality of these challenges, we have to get involved in the details, just as we have to get back involved in politics; that's how we improve the world. Since our conversation, the British government has published its Data Ethics Framework, setting out moral guidelines for the use of data, so that AI and ethics can go hand in hand. Eventually, though, we're going to face an even more fundamental
16:12
challenge. In some respects computers are already smarter than we are; what happens when this is true in every respect? Nick Bostrom was one of the first to warn of the dangers of superintelligence. I asked him whether we'd simply not thought through the consequences. So far, this approach of just trying stuff out has been the key to unlocking this modern prosperity that
16:45
we now take for granted. So it's been a gamble that has so far worked out brilliantly, and I just think: how confident can we be that it will always continue to work out brilliantly? Like, could there be some possible discovery such that any civilization that makes that discovery inevitably destroys itself? You could imagine that kind of black ball being extracted from the great urn of invention, and at the moment we are reaching in, pulling out one ball after another, and they have been for the most
17:17
part beneficial, maybe some grey balls, but on balance usually beneficial. But if there is a black ball in there, it looks like we are eventually going to pull it out, and what we don't have at the moment is the ability to put the ball back into the urn. We can't uninvent our inventions, so we just have to hope. And what really worries you is that for a long time this artificial intelligence is doing the kind of things we want it to do, but there comes a moment
17:48
when it might take what you call a treacherous turn. So can you give us an example of a treacherous turn? Well, it's the idea that the other technologies we have are dumb, passive objects that sometimes misfire and cause us problems, but they are not actively strategizing and plotting to thwart our attempts to intervene. If you're dealing with a human adversary, then it's different: they can anticipate your actions, and it's a much more complex strategic problem.
18:20
And the point here is that this technology is unlike all the others, in that once it reaches a sufficiently great level of intelligence it can engage in strategic behavior that could involve deception: deceiving its programmers about its true values, if it predicts that that will then enable it better to realize those values. And you worry that even if we have a kind of big switch that would switch it off, it would be bright enough to persuade people to switch it back on again. That would be an obvious concern if you're
18:52
really talking about something that is smarter than humans, so I think we shouldn't rely on that, but instead try to design the system in such a way that it is an extension of our own will, that it is on our side, so that even if it escapes from all the physical safeguards that we have, it will still be beneficial, because it doesn't actually want to harm us; it wants to help us, it wants to do whatever the intentions were that we embedded in it. That is now a research
19:21
field, the study of scalable control methods, which has sprung up, and it's encouraging to see some really bright people now working on this problem. If I can just take you back from this future singularity, which we don't know exactly when it's going to be, to what's actually predictable, because it's beginning to happen now, namely artificial intelligence as we have it, whether it's used for medical diagnosis or autonomous vehicles or whatever: do you have
19:54
concerns about where AI is going? I'm actually quite optimistic about the near-term impact of AI, and I should say, because we've spent some time talking about possible risks and stuff, that I'm also really excited about the upside of this if we get it right. In the book, more pages are devoted to the risks and the downsides, and that's because I thought it was key to have a more granular understanding of where the pitfalls are so we can make sure to avoid them. It doesn't mean I don't think there is this plausible,
20:26
enormously good upside, so I want to just emphasize that, because sometimes I get kind of mistaken for a sort of doomsayer with regard to AI. No, I think the near term is overwhelmingly positive. There are applications across really all sectors of the economy and society that you could imagine: better diagnostic tools, things that are very exciting and really useful. So if you have a big logistics center, if you can better predict consumer demand you can cut costs
20:57
and make your products available more cheaply to more people. Or self-driving cars: if you could get that to work, every year 1.2 million people die in road accidents around the world; with self-driving cars you could cut that right down, maybe reduce it by 99 percent. And you name it, you can go through almost any area of activity that we humans are struggling with, and AI could help. How concerned are you about the impact on society, for example, of progressive numbers of occupations like
21:27
drivers of buses and lorries and taxis disappearing, or of rendering a lot of medical staff redundant because of the diagnostic acumen of AI? How far are you worried about this sort of specter of mass unemployment, of a lot of people who just don't have anything to contribute because AI can do it better than them? So I think over the next few years I don't expect AI to have a huge impact on unemployment. Eventually, yes, but I'd see
21:58
that as a potentially good thing, in the sense that the goal is full unemployment: the reason we invent technologies is that we can do more with less, we can achieve more of what we want with less labor, and the logical endpoint of that is being able to do anything we want with no effort at all; we can all play and relax rather than work. Don't you see, in fact, work as essential to human dignity? Well, I think there are at least two functions of work. One is a source of income, obviously, but in this hypothetical
22:30
scenario machines could do everything; imagine that problem solved. Then, it is true, there is the second function of work, as a source of meaning, dignity and something to do, and that will require a fundamental rethink of our culture, of what we place value on. And we know that there are various groups, aside from the classical aristocracy, that don't work: take, let's say, children; they seem to have worthwhile lives, and they don't produce anything of economic value. Some retirees, if they are healthy and have
23:01
many friends, enjoy life, and so we would have to kind of reallocate people to learn to find meaning and worthwhile activity outside the need to earn income. Are you worried about the fact that AI will allow, and perhaps is already allowing, certain governments unprecedented abilities of surveillance, which could be used really in the most repressive way? The degree to which people have our most personal data on the one hand, and the
23:33
ability of facial recognition software to pick out a face in a crowd on the other, is giving governments enormous potential power to restrict our freedoms. I think that is so, and not just AI but other technologies like cheap cameras and data storage and so forth. And we don't really know what happens if you turn one of these knobs on the big panel of, sort of, social dynamics. We don't have the kind of political science that can
24:03
tell us with accuracy how society changes, how politics changes, if we change one of the fundamental parameters, like the ease with which you can track people's past behavioral records. We know that in the past technologies have changed political dynamics profoundly: say, with the invention of agriculture and writing you suddenly got states and social stratification, or with gunpowder the end of the feudal era. As for the sort of value codes that are written into new technologies, who should be writing
24:36
those, and who should be consulted? Is this for governments, is it for academics? Who is going to tell us what values we should program into artificial intelligence, so that we can make decisions, for instance, when it comes to a conflict between an individual right of privacy and the common good which could emerge from governments having access, for instance, to all our health data? I think we have to look at this a
25:06
little bit on a case-by-case basis. So with most technologies, more incremental technologies, I think in general it's good to initially have it fairly unregulated so people can experiment and see what works, and then if problems emerge they need to be dealt with, ideally by some people voluntarily, where necessary by government regulation. I think the prospect of superintelligence might be different in this regard, in that it might be so revolutionary that it's key to get it right on the first attempt. Yuval Harari has said that we are
25:37
going through this momentous decoupling of intelligence and consciousness. We always thought what made us human was that we are intelligent and we're conscious, but if it's intelligence that defines us as such, we've already lost that battle. So he is worried that we're going to have to think through fundamentally what makes us human. So can I ask you: what do you think makes us human? Sometimes we refer to humanity as the set of things that have, sort of,
26:08
moral significance, but that I think could in principle include things that don't belong to our species. If, say, we found in some remote caves Neanderthals having survived, and they could learn to talk and they were sort of human-level in their capabilities, I think they should be accorded moral status even if they weren't members of our species. And potentially even, you could imagine, these digital systems, if they became sophisticated enough, like really developed a full panoply of human cognitive capabilities, or even maybe
26:40
those of animals, they should then be accorded a moral status equivalent to the animal in question or the human in question; that one shouldn't fundamentally discriminate on the basis of which atoms are used to realize a particular mind, but it's the thoughts, the thinking, the awareness that determines whether something has moral standing. This thought, that artificially intelligent systems might one day have moral status like humans, raises the question: what actually makes us human?
27:10
That's the question I put to the students from Queens' School. For you, what is it about being human that we must not let go of? Damien: Humans have always understood themselves and created society on how they interact or how they act on the world. I think that's what it is to be human: to act on the world. If that means creating artificial intelligence without safeguards, I think it would be too limited to say we need to have a fear of this black
27:42
ball; surely it would be far better to risk everything than to gain nothing. Beth: For me, if you look at the question of intelligence, what does it mean to be intelligent? Well, you can say, actually, you can wheel off loads of facts, for instance when you're sitting an exam; or you can have a nuanced opinion about something. One of those I would consider actual intelligence: to look at something, look at the different sides of it, come to an opinion and judgment based on it. Tom: One thing that Nick said that really interested me was when he was talking about the potential for a future society in which robots do all
28:14
menial tasks for us. That's something that made me think about what it is to be human, because I don't know where I'd find meaning without work. I guess we would look for productive things, but I don't know whether absolutely everyone could do that; it just makes me wonder. Damien: I definitely think a future in which all menial tasks are taken over by robotics could potentially overcome alienation, and people could engage in creative,
28:45
imaginative, personal, artistic, productive work, as opposed to the monotonous drudgery that we definitely see in our industrial society. This might be a bit of a utopian solution, but I think once tech advances sufficiently it can take over our, like, measly jobs, if you will, in terms of tidying up the house and stuff, and this gives us more time to interact with our family and, like, do human stuff. Jenna: I agree with Damien, and
29:18
I've obviously been very negative about AI, but I think the only good thing would be if every job was taken over by AI, because then people wouldn't think that the meaning of life was to make money, and we'd be able to find pleasure in other things, such as the arts and stuff. Given that we're going to have to program morality into this artificial intelligence, what would you choose to program in, and who should be doing this? Damien: I know it's really difficult to talk about objective morality, but I
29:48
think there is, you'll find, some kind of basic principle or moral that we can apply to every situation. I think it's certainly an interesting question to see who has, almost not the right, but who should be allowed to put their morals into the program: is it the people that created it, or is it the government, which would then take it away from the people that produced it? I think I'm quite looking forward to seeing how they're going to figure out that question. Ricky: I think the, like, perfect model of an AI or something like that should be put down to the law of
30:20
the land in which that AI is. As long as it acts in accordance with the legal structure of that country, I think it's fine, because if we have different laws for different countries, then we should have different AIs acting differently in different countries. Jenna: But the law isn't always our morality, so are you putting morality into the robots or are you putting the law in? Because they could be completely different things. Ricky: As I said, to put morality into AI is too big a question, so this is just a more practical solution, because we're never going to be able to find the perfectly moral person that we can integrate into artificial
30:52
intelligence. Bethany: Putting the law of a particular country into the AI is only practical if you've got a democratic process: when you elect your MPs, they're people whose morality you agree with, so you get the general consensus of the country. If you don't have those democratic processes, then you've got a government that's more dictatorial, and they've then been allowed to put their law into artificial intelligence, which would have the capabilities to do so much damage. I'm not sure you could argue that it's a practical response to put the law into artificial intelligence. So far we've been talking about fairly benign
31:25
developments in artificial intelligence, but of course it's conceivable that the people who will be most interested in developing this might be highly repressive governments, for instance, that would want to be able to track the thoughts and deeds of potential dissidents, and they will be given unprecedented control by this technology; governments that frankly hold warlike attitudes to neighbors, who would be
31:57
thrilled to be able to develop autonomous weapons. We're dealing here with intelligence devised by us, by human beings, and human beings have not been totally benign and benevolent throughout history. So what about your worries on that front, that these could be used quite clearly for destructive purposes vis-à-vis the people not like us? Bethany: I think this might sound a little cynical, but I personally would say I
32:27
wouldn't trust anyone with them, because even international bodies, say, like the UN, everyone has their own agenda, and I don't think you could ever trust anyone to be completely neutral, to be able to then use that weaponry with distinction between what is right and what is wrong. Ricky: I think that we have to trust people, and I think we have to trust our governments. There are obviously an awful lot of machines at the moment anyway that could do harm; autonomous weaponry of course is on another level, it could do much more harm, but if we cannot trust our
32:58
government, the people that are supposed to be leading us, then who are we going to trust? [Music] There's a famous story at the beginning of the Bible about Adam and Eve and the forbidden fruit, whose essential message is that even in Paradise there are limits: some things we should just not do. I asked Nick Bostrom how we can ensure we'll always remember that there are limits, which may be the only thing that will save us from creating something dangerous just to show we can. I wish I
33:29
knew the answer to that. I mean, one thing is recognizing where the limits are, and not false limits. But the second is, even if it was generally recognized where some limit was, how do you get everybody to agree not to step over it, given that there are so many different people and different nations, each with their own opinions about this matter? And we don't really have very much of an ability at the global level to adopt a coordinated approach to these things, so things will happen if at least one significant group thinks they are worth doing. I think the
34:00
most poignant sentence in all of religious literature occurs in Genesis chapter 6, where God sees exactly what he's done by creating us, and it says that God regretted that he had created humankind. Might we one day regret that we had created something that was cleverer and altogether more gifted than humankind? It is possible. I think also there is the possibility we
34:30
would make an enormous mistake by not doing it, even if maybe we would never find out how big the mistake was. The thing I would focus on, though, is not so much trying to either speed up or slow down the progress towards AI, because I just don't think we have very much influence over that, given the enormous drivers all across the world, commercial and scientific, pushing it forward, but rather to try to accelerate the things that could help increase the chance that it will be for the good. So that would be solving the problem of scalable control, and it
35:02
would also be solving the kind of political problem of forming some sort of reasonably cooperative approach to developing this and using it for the common good. That was the view of a philosopher. I wondered what Mustafa Suleiman felt, as someone at the cutting edge of this technology: have we created something uncontrollable? No, we've certainly not created something we can't control. There are narratives that have speculated about a long-term future where autonomous superintelligent systems
35:34
could be independently roaming around our world, have agency over us and have more power than our species. I think that sort of speculation does a disservice to the more practical, real-world, near-term ethical consequences of the collection of data at the kind of scale that we've talked about. And so my concern is to focus on how we can really ensure that these algorithms are contained and controlled, their agency is restricted, and they're focused on the real
36:06
challenges that we have today. We have 800 million people who don't eat well every night, who are malnourished; we have 900 million people who don't have access to clean water; we have all sorts of diseases; we have raging child poverty, even in this country, with 5 million children in child poverty. These are serious issues that we should be focusing our very best minds on, and that's where my morality is centred today. So we must never relinquish moral responsibility, we must never lose sight
36:36
of human dignity and what preconditions, physical and economic, it requires, and we just have to get active and not hand this away to the big corporations or some remote panel of ethicists. That's right, every single one of us is responsible for the morality of our everyday systems, and I think one of the sad things about the last few decades is that not only have we become apathetic with respect to our politics, but we
37:08
appear to have also become apathetic with respect to our everyday morality. Morality is at the centre of our human existence; it's what makes us wonderful as a species, and in order to steward it and nurture it and protect it, we all have to be engaged in that practical ethical question every day: what more can I contribute, how can I make the world a better place? We have to be engaged with that question, and that means being part of politics, being part of our civil society, of our
37:38
NGOs, and that's really how we improve the lot of our species. I was really moved when Mustafa said that morality is at the centre of our human existence, but I had one more question to put to each of the students: given all these unknowns, taken all in all, does the prospect of artificial intelligence excite you or scare you? Ricky: I think it really excites me at the moment, but there's one particular thing that really scares me,
38:10
and it's the words of those who are already experts in this field, people like Bill Gates and Elon Musk, saying that it is the greatest threat to humanity, because they're the type of people that you would think would be telling you what AI is like. So in the short term I think the improvements in AI are amazing, medically and even socially, but I'm not too sure about the distant future. Bethany: I'm torn on whether it would excite me or make me fear it, because I think in certain places it's
38:41
certainly going to be useful, and it's certainly going to give a lot of hope to people, but talking about, say, autonomous weaponry, there's nothing about that that would make me excited by the fact that it's getting more efficient, or that the ability to use it is going to become more widespread. Damian: I don't think you have to be a specialist or an expert to pass judgment on, say, artificial intelligence. I mean, nobody in this room is an expert, and here we are discussing it. So I think even your bog-standard person off the street could give you an opinion on it, and even, if it was entrusted to them,
39:12
could decide how we wanted to organize society with this amazing development. Tom: I'm cautiously excited about AI, especially in the field of ironing out human error. I think, like any advancement in technology, it can be used for bad, and I think that the challenge is just fighting that battle. Chandler: I'm completely terrified of AI, because I think that power can corrupt, and I think that if these politicians and governments have the chance to use it, then they probably would, and I think it has
39:43
gone too far. Sabina: I think it's rational for anyone to be a bit scared of what we don't know, because we don't know how far AI could get, considering how these systems learn from their mistakes and learn from what they're doing. But I think it can also be seen as a form of liberation, in taking over day-to-day boring jobs. I love the story about a woman who became a mother for the first time. She said: now I've had a child, I can relate to God much better, because now I know what it's like to create something you
40:15
can't control. Only time will tell whether in AI we've created something we can't control. What we can and must control is how AI and data are used, because already there are companies that know more about us than we know about ourselves. Human beings are the only life form thus far known capable of asking the question "why", which means we can choose our fate in the full dignity of responsibility, never forgetting that machines were made to
40:46
serve human beings and not the other way around. If we forget what makes us human, we may one day endanger the very future of humankind. Turning to that future: tomorrow, in the final programme of this series, we'll look at our sources of hope and inspiration. Who today are our moral heroes?
