by Dr. RGS Asthana, Senior Member IEEE
Figure 1: Perils of AI [23]. Is it true?
Summary
It is possible today to program robots, or more generally machines and automation, to do a great deal of repetitive work for us, and for this they need not be super intelligent. This is an issue of management, and we must manage it well. However, at the current pace of progress, within a few decades the intelligence imparted to machines by human beings may grow strong enough to become a concern. This level of intelligence is referred to as super intelligence, which may someday take on the human race, a thought that bothers people like Elon Musk and many others.
Big Data [27, 36], Cloud Computing [33], InfoGraphic [34], Internet of Things [27, 35] and AI technologies, when used together, may show very interesting results [27, 32]. In fact, humans are limited by slow biological evolution; they could not compete with ever-evolving AI and would in all likelihood be superseded one day, though nobody yet knows when.
Keywords
Artificial Intelligence (AI), Weak AI, Strong AI, Deep Learning, Reinforcement Learning (RL), Generative Adversarial Networks (GANs), Machine Learning, Neural Networks, Natural Language Processing (NLP), Big Data, Cloud Computing, InfoGraphic, Internet of Things
Prelude [1-5]
Elon Musk is a very future-oriented tech entrepreneur, as evidenced by Tesla and SpaceX. Even so, Musk has always been vocal about his fear of a world takeover by artificial intelligence (AI). Curiously enough, his solution to keep the AI doomsday away is to merge the human brain with machines [3]. Some of the most popular sci-fi movies, such as 2001: A Space Odyssey, The Terminator, The Matrix, Transcendence, Ex Machina, Transformers, Robot and many others, were created on the assumption that AI will in time progress to a level at which humanity loses control of its own creations, leading to the end of civilization. AI [37] has recently attracted the attention of even high-profile figures such as Stephen Hawking, Bill Gates and Elon Musk, who have voiced concern about the perils its advancement may bring.
A few quotes given below illustrate the perceived perils of AI, along with comments from an AI expert who says such fears are misplaced:
‘If AI power is broadly distributed to the degree that we can link AI power to each individual’s will — you would have your AI agent, everybody would have their AI agent — then if somebody did try to do something really terrible, then the collective will of others could overcome that bad actor.’
—
‘I think we
should be very careful about artificial intelligence. If I had to guess at what
our biggest existential threat is, it’s probably that. So we need to be
very careful.’
—
‘I think we need to be very careful in how we adopt artificial intelligence and that we make sure that researchers don’t get carried away. Sometimes what will happen is a scientist will get so engrossed in their work that they don’t really realize the ramifications of what they’re doing.’
— Elon Musk
‘I think the
development of full artificial intelligence could spell the end of the human race.’
—
‘Once humans
develop artificial intelligence, it will take off on its own and redesign
itself at an ever-increasing rate.’
—
‘A super intelligent AI will be
extremely good at accomplishing its goals, and if those goals aren't aligned
with ours, we're in trouble.’
— Stephen Hawking
‘I don’t think it’s very helpful for other people who are incredible in their domains commenting on something they actually know very little about, but because they are quite big celebrities now, more than just scientists or businessmen, it gets picked up a lot.’
— Demis Hassabis, CEO of DeepMind, bought by Google in 2014
Hassabis, CEO of DeepMind, says further that he has criticized the likes of Microsoft founder Bill Gates and SpaceX and Tesla founder Elon Musk for their comments on AI, The Times of London reported on Mar. 27, 2017. His main point was that they are not AI experts and hence may not be right. During an "ask me anything" question-and-answer session on Reddit in January 2015, Mr. Gates wrote: "I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well." [12]
AI is the theory and development of computer systems that can perform tasks normally requiring human intelligence. Facebook uses AI for targeted advertising, photo tagging and news feeds. Microsoft and Apple use AI to power their virtual intelligent assistants, Cortana and SIRI respectively. Google has always used it for its search engine.
Sci-fi novelist David Brin feels that these dire predictions are often simplistic and unreasonable [23] (see Figure 1).
What is AI [7, 8]?
AI is embedded in Cortana and SIRI, from Microsoft Corp. and Apple Inc. respectively, and AI also underlies self-driving car technology.
As per Wikipedia [7], weak AI (also known as narrow AI) is focused on a narrow task only (e.g., only playing chess, solving equations, facial recognition, internet searches, driving a car, or automatic translation of a single language pair, say English to Russian). Narrow AI is designed to analyze data and form conclusions far more efficiently than humans can, and it can presently outperform humans at its specific task. General AI would ultimately outperform humans at nearly every cognitive task.
SIRI from Apple Inc. is a good example of narrow intelligence. SIRI operates within a limited, pre-defined range; it has no genuine intelligence, no self-awareness and no life, despite being a sophisticated example of weak AI.
The purpose of general AI is to mimic human behavior as much as possible; a developer's goal is to make a system as life-like as possible [48]. Weak AI is defined in contrast to either strong AI (a machine with consciousness, sentience and mind) or artificial general intelligence (a machine with the ability to apply intelligence to any problem, rather than just one specific problem). All currently existing AI systems are, at most, weak AI.
According to strong AI (or general AI), any suitably programmed computer with the right inputs and outputs would have a mind in exactly the same sense that human beings have minds; the correct simulation really is a mind. According to weak AI, the correct simulation is only a model of the mind [11]. A strong AI machine would therefore be capable of doing all tasks rather than just one, as in the case of a weak AI machine. It would outperform the human mind in all cognitive tasks; in other words, it would be super intelligent. Such a machine, once accomplished, may pose a risk to the human race in time to come.
While sci-fi movies often depict AI as robots with human-like characteristics, AI is embedded in almost anything, from Google's search algorithms to IBM's Watson (which can understand all forms of data, interact naturally with people, and learn and reason at scale [13]) to autonomous weapons (the US has put AI at the center of its defense strategy, with weapons that can identify targets and make decisions [24, 27]). A drone with six whirring rotors and a camera is freely available on Amazon; equipped with advanced AI software, it could be transformed into a weapon that finds and identifies men carrying replicas of AK-47s around, say, a village, who pose as rioters but may be activists, insurgents or extremists. The autonomy may (or may not) be shrewdly compromised when an elimination decision is taken. The Pentagon's latest budget allocates $18 billion over the years 2016-18 to technologies that include those needed for autonomous weapons. In fact, Big Data and AI combined will bring new research projects.
Searle [10] does not disagree that AI research can create machines capable of highly intelligent behavior. The Chinese room argument [9] holds that a digital machine could be built that behaves more intelligently than a human yet has no mind, and so does not think in the way human brains do. Indeed, Searle writes that "the Chinese room argument ... assumes complete success on the part of artificial intelligence in simulating human cognition."
AI vs. Machine Learning vs. Deep Learning [40]
· AI: The area includes automated "knowledge" tasks, Robotics, Internet of Things, 3D printing technology, and self-driving cars. It is expected that the total commercial effect of these technologies will be USD 50 - 99.5 trillion by 2025.
· Machine Intelligence (MI): Many experts believe that Machine Intelligence and AI are interchangeable terms; "Machine Intelligence" is generally used in Europe, while "AI" is used in the US. MI research typically draws more directly on models of the biological neuron and deploys more advanced methods than simple neural networks.
· Machine Learning (ML): An inherent part of AI, machine learning refers to the software research area that enables algorithms to improve by learning from data, in some cases without any human intervention (unsupervised learning); a minimal sketch of such unsupervised learning is given after this list.
· Deep Learning (DL): Deep Learning is a branch of ML. It pertains to the study of "deep neural networks", which are modeled after the inner layers of the human brain. Deep Learning tries to emulate the functions of those layers, and its successful applications are found in areas such as image recognition, language translation, and email security.
Such systems learn and improve performance over time, adjusting the weights of their nodes and their data sets to remain efficient. To reduce processing-power requirements, most training features in commercial AI are kept at very simple levels [48].
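As referenced in the Machine Learning item above, the following is a minimal, illustrative sketch of unsupervised learning: k-means clustering implemented with NumPy. The toy two-cluster data and the choice of k = 2 are assumptions made only for this example; the point is that the algorithm improves its cluster centers from the data alone, with no labels or human intervention.

```python
# A minimal sketch of unsupervised learning: k-means clustering with NumPy.
# The two-cluster toy data and k = 2 are illustrative assumptions; no labels
# are used, and the centroids improve from the data alone.
import numpy as np

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 1, (100, 2)),    # points clustered around (0, 0)
                  rng.normal(5, 1, (100, 2))])   # points clustered around (5, 5)

k = 2
centroids = data[rng.choice(len(data), k, replace=False)]  # random initial centroids
for _ in range(20):
    # assign each point to its nearest centroid
    labels = np.argmin(np.linalg.norm(data[:, None] - centroids, axis=2), axis=1)
    # move each centroid to the mean of its assigned points
    centroids = np.array([data[labels == i].mean(axis=0) for i in range(k)])

print(centroids)   # approximately [[0, 0], [5, 5]]
```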
Computer-Brain Interface [3]
Efforts to develop a human-machine or brain-computer interface are also under way. Neuralink, a US startup company developing implantable human-computer interfaces, was founded in 2016 by Elon Musk and first publicly reported on in March 2017. The U.S. Defense Department's research arm, DARPA, is also working on similar technologies. "We are giving our physiology the opportunity to work with machines in a different way," said Justin Sanchez, director of DARPA's Biological Technologies Office.
The effort is to merge human intelligence with machines. This may also help to avoid an AI doomsday scenario, hence Elon Musk's interest in such a venture.
A quote on Brain–Computer interface follows:
‘Such a massive interconnection
will lead to the emergence of a new global consciousness, and a new organism I
call the Meta-Intelligence.’
— Peter Diamandis
Progress in AI [14 - 17]
The Neural Information Processing Systems (NIPS) conference is an annual event; at NIPS 2016, "reinforcement learning" (RL) [25] and "generative adversarial networks" (GANs) were the in thing in AI. RL emulates the way animals learn: the idea is to work out how certain behaviors tend to result in a positive or negative outcome. Using this method, a machine can navigate a maze by trial and error and then associate the positive outcome (exiting the maze) with the actions that led up to it. This lets a machine learn without instruction or even explicit examples. By definition, RL allows machines and software agents to automatically determine the ideal behavior within a specific context in order to maximize performance; simple reward feedback is used as the reinforcement signal for the agent to learn its behavior.
The idea has been around for decades, but combining it with large neural networks provides the power needed to make it work on really complex problems, like the game of Go, which originated in China more than 2,500 years ago. AlphaGo [16] worked out for itself, by trial and error, how to play the game at an expert level.
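To make the reward-feedback idea concrete, here is a minimal, illustrative Q-learning sketch on a toy one-dimensional corridor "maze" with the exit at the right end. The corridor length, reward, learning rate and exploration rate are assumptions made for illustration; this is not the approach used by AlphaGo, only the simplest form of the RL loop described above.

```python
# A minimal Q-learning sketch on a toy 1-D "maze" (corridor) of 6 cells.
# The exit is the rightmost cell; the agent gets a reward of 1 only on exiting.
import random

N_STATES, ACTIONS = 6, [-1, +1]          # move left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2    # learning rate, discount, exploration rate

for episode in range(500):
    s = 0                                 # start at the left end
    while s != N_STATES - 1:              # until the agent exits the maze
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda x: Q[(s, x)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0   # reward only on exiting
        # reinforce the action by the reward plus the discounted future value
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, x)] for x in ACTIONS) - Q[(s, a)])
        s = s2

# learned policy: move right (+1) from every non-terminal cell
print([max(ACTIONS, key=lambda x: Q[(s, x)]) for s in range(N_STATES - 1)])
```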
Work on GANs was initiated by Ian Goodfellow [15]. A GAN is a system consisting of two independent networks: a generator, 'G', which produces new data after learning from a given (real) training set, and a discriminator, 'D', which tries to differentiate between real and fake data. This approach could be used to generate video-game scenery, de-blur pixelated video footage, or apply stylistic changes to computer-generated designs. In fact, both GANs and RL help improve the performance of unsupervised machines (neural networks [17, 26, 29]).
Goodfellow [15] showed that the generator 'G' performs a form of unsupervised learning on the original dataset. Further, Yann LeCun [31], Director of AI Research at Facebook and Founding Director of the NYU Center for Data Science, has stated that unsupervised learning is the "cake" of true AI. This powerful technique (the GAN) is claimed to be easily programmed using PyTorch [30] in under 50 lines of code [29].
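A minimal sketch of the GAN training loop described above is given below in PyTorch, in the spirit of the 50-line example cited in [29] (it is not that code). The toy one-dimensional Gaussian "real" data, the network sizes and the hyper-parameters are assumptions for illustration only; the generator G learns to produce samples the discriminator D cannot tell apart from real ones.

```python
# A minimal GAN sketch in PyTorch (illustrative only).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator
loss = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0        # samples from the toy "real" distribution
    fake = G(torch.randn(64, 8))                 # generator output from random noise

    # Train D: label real data as 1, fake data as 0
    opt_d.zero_grad()
    d_loss = loss(D(real), torch.ones(64, 1)) + loss(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Train G: try to make D label fake data as real
    opt_g.zero_grad()
    g_loss = loss(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```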
The other area of AI research is language: parsing and generating it effectively. This is a long-standing goal in artificial intelligence, and the prospect of computers communicating and interacting with us using language is an exciting one. Better language understanding would make machines a whole lot more useful. Natural Language Processing (NLP) [22] solutions can parse regulatory text and pattern-match it against a cluster of keywords to identify the changes relevant to a particular organization [20].
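As a toy illustration of such keyword-based pattern matching (not the approach of any particular compliance product described in [20]), the sketch below scans a passage of regulatory text against hypothetical keyword clusters and reports the relevant topics.

```python
# A minimal sketch of keyword-based matching over regulatory text.
# The keyword clusters and sample text are hypothetical illustrations.
import re

KEYWORD_CLUSTERS = {
    "sanctions": ["sanction", "embargo", "restricted party"],
    "reporting": ["suspicious activity", "disclosure", "filing deadline"],
}

def relevant_topics(paragraph: str) -> list[str]:
    """Return the clusters whose keywords appear in the paragraph."""
    text = paragraph.lower()
    return [topic for topic, words in KEYWORD_CLUSTERS.items()
            if any(re.search(r"\b" + re.escape(w) + r"\b", text) for w in words)]

print(relevant_topics("New filing deadline for suspicious activity reports takes effect in June."))
# -> ['reporting']
```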
Contextual analysis of vast volumes of conversations from phone recordings, chats and emails can identify potential market manipulation and collusion better than traditional text and voice processing techniques. Similarly, deep learning applied to transactions or business rules can become more refined and significantly reduce the number of activities flagged for investigation compared with traditional methods.
Don’t expect to get into a deep and meaningful conversation with your smartphone just yet, although some remarkable changes can be expected in 2017 due to further advances.
Perils of AI [19, 20]
Some of the perils of using AI to solve problems are well known: herding behavior, out-of-sample extrapolation and false correlations, to name a few. Beyond these, additional risks arise as compliance teams become more technology-driven and more reliant on AI-based machines to do their jobs.
In 2013, a single fake tweet from a verified account about an explosion in the White House briefly wiped out about $140 billion in US market value. Do we have a foolproof AI solution yet? The answer is probably no even today, but it may become yes tomorrow.
AI may learn gender bias through historical data, which is then carried into decision making. Take, for instance, word embedding algorithms in NLP [22]. These algorithms carry historical categorizations (sexism, for example) into the future simply by learning patterns from words that often appear together in historical data; a toy illustration follows below. We can automate the decision-making process, but we need to ensure that it does not convey a false level of confidence in machine performance, e.g., an NLP solution that tracks thousands of regulatory changes.
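The toy sketch below illustrates, with invented three-dimensional vectors, how embedding geometry can encode such historical associations. Real embeddings such as word2vec or GloVe have hundreds of dimensions and are learned from large corpora, but the bias mechanism is the same in principle.

```python
# A toy illustration of how word embeddings can encode bias.
# The 3-dimensional vectors below are invented purely for illustration.
import numpy as np

emb = {
    "doctor": np.array([0.9, 0.7, 0.1]),
    "nurse":  np.array([0.8, 0.1, 0.9]),
    "he":     np.array([0.6, 0.8, 0.0]),
    "she":    np.array([0.6, 0.0, 0.8]),
}

def cos(a, b):
    # cosine similarity: how strongly two words are associated in the embedding
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# If a corpus associates "doctor" mostly with "he" and "nurse" with "she",
# the learned geometry reflects (and perpetuates) that association:
print(cos(emb["doctor"], emb["he"]), cos(emb["doctor"], emb["she"]))
print(cos(emb["nurse"],  emb["he"]), cos(emb["nurse"],  emb["she"]))
```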
Similarly, AI can create a false level of confidence among regulators when establishing whether due process is being followed in business practices such as sanctions screening or suspicious activity reporting.
Mercedes-Benz’s self-driving car program prioritizes passenger safety over pedestrian safety by design; it is a question of ethics. Such risks can undo an entire institution's performance, wiping out its edge on the revenue-generation side. Companies like Microsoft, Amazon and Facebook have acknowledged these risks and follow ethical best practices for AI development. The volume of regulatory changes, the increased difficulty posed by cheats, and enhanced scrutiny from regulators will predictably make compliance teams turn to AI for solutions.
The other risk of AI super intelligence is to the survival of humanity: once the cognitive power of the brain, and emotions, are imparted to machines (human-like robots, as often depicted in movies), there is a risk of a takeover of humanity by machines. Such risks have been discussed by celebrities and experts, as mentioned in the prelude to this article.
It is expected that advances in robotics and the use of AI [21] could affect nearly 30 percent of U.K. jobs by the 2030s, compared with 38 percent in the U.S., 35 percent in Germany and 21 percent in Japan. Robots are likely to take over many existing jobs in the near future.
Bill Gates says that machines should be taxed just like human labor. Jobs with more of a human touch, like health and education, would be safer. Automating more manual and repetitive tasks will eliminate some existing jobs, but it could also enable some workers to focus on higher-value, more rewarding and creative work, removing the monotony from our day jobs.
How AI will shape the future [5]
The future use of AI lies in fields like healthcare, smartphone assistants and robotics. DeepMind is in a partnership with the NHS under which machine learning is to be used in healthcare. IBM's Watson is used to build expert systems that can help with medical diagnosis from images and help people lead healthier lifestyles. For the NHS, however, the first stage is to develop helpful tools for visualizations and basic statistics on all sorts of twenty-first-century platforms, mainly smartphones and tablets.
DeepMind aims to make smartphone assistants smart and context-sensitive, with a deeper appreciation of what the phone's owner is trying to do. Presently, most of these systems are extremely weak because they are mainly template-based: once you go off the templates given in the program, they prove pretty useless.
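As a toy illustration of why template-based assistants break down off-template (real assistants are of course far more involved than this), the sketch below matches a request against a fixed set of hypothetical regular-expression templates and simply gives up on anything else.

```python
# A toy template-based "assistant": hard-coded regex templates only.
# Anything that does not match a template gets a canned failure response.
import re

TEMPLATES = [
    (r"set an alarm for (\d{1,2}) ?(am|pm)", lambda m: f"Alarm set for {m.group(1)} {m.group(2)}."),
    (r"what(?:'s| is) the weather in (\w+)", lambda m: f"Looking up the weather in {m.group(1)}..."),
]

def respond(utterance: str) -> str:
    for pattern, handler in TEMPLATES:
        m = re.search(pattern, utterance.lower())
        if m:
            return handler(m)
    return "Sorry, I don't understand."            # off-template: the system is stuck

print(respond("Set an alarm for 7 am"))            # matches a template
print(respond("Wake me before my first meeting"))  # off-template -> useless
```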
Robotics is used mainly in two ways. Companies like Fanuc [6] make industrial robots that do incredible-sounding things for a very specific purpose, tasks that do not need intelligence. Then there are concierge-style robots, like SoftBank's Pepper, which have some intelligence and are basically similar to smartphone assistants. In summary, there are robots that only use pre-programmed templates and another type that is slightly intelligent.
Further, the concierge robots may also be pre-programmed with template responses, and it is easy to confuse them. A great deal of research is being done in machine learning [18] to instill cognition in machines.
In February 2017 [38], Google released TensorFlow 1.0, an open-source library for machine learning and AI that collects its latest set of AI tools. This library, as per Google, is production-ready by way of its application programming interface (API). An artificial neural network (ANN) [26] with a few enhancements is called a deep neural network (DNN): a DNN is an ANN with multiple hidden layers of units between the input and output layers [44]. Both ANNs and DNNs can be trained on data. TensorFlow also includes tools for computing K-means and support vector machines (SVMs). Deep learning [39] involves a cascade of many layers of nonlinear processing units for feature extraction and transformation; each successive layer uses the output from the previous layer as input. The algorithms may be supervised or unsupervised, and applications include pattern analysis (unsupervised) and classification (supervised). Convolutional neural networks (CNNs) have now become the de facto method for visual and other two-dimensional data. A CNN is composed of one or more convolutional layers with fully connected layers (matching those in typical ANNs) on top, and can be trained with standard back-propagation techniques. CNNs are easier to train than other regular, deep, feed-forward neural networks and have far fewer parameters to estimate, making them a highly attractive architecture to use.
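For illustration, the sketch below builds and trains a small CNN of the kind described above on 28x28 grayscale digit images, using the Keras API bundled with recent TensorFlow releases (not the original TensorFlow 1.0 API); the layer sizes and hyper-parameters are arbitrary choices for the example.

```python
# A minimal CNN sketch for 28x28 grayscale images (MNIST), using tf.keras.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),  # convolutional layer
    tf.keras.layers.MaxPooling2D(),                                             # downsample feature maps
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),     # fully connected layer on top
    tf.keras.layers.Dense(10, activation="softmax"),  # 10-class output
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0                    # add channel dimension, scale to [0, 1]
model.fit(x_train, y_train, epochs=1, batch_size=128)   # trained with standard backpropagation
```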
What is Hadoop? As per the definition on Whatis.com [47], "It is an open source, Java-based programming framework that supports the processing and storage of extremely large data sets in a distributed computing environment. It is part of the Apache project sponsored by the Apache Software Foundation." Forrester Research predicted that the Hadoop revolution (once very useful and a de facto platform for handling Big Data) will be overtaken by the AI revolution, declaring in a March 2017 report that "artificial intelligence will eclipse Hadoop" [46]. Forrester argues that "Hadoop may have sparked a revolution in analytics, but the most recent revolution is the deep learning." The first hit to Hadoop came from companies offering cloud services with modern management services on their platforms. One of the authors of the Forrester report, Brian Hopkins, has already addressed this claim:
"Just like Hadoop wasn't designed for the cloud, it wasn't designed to do the matrix math that deep learning requires. And the cloud crew is busy creating specialized AI-friendly environments, which means Hadoop vendors have even more work to do to keep their software relevant. Will they make Hadoop a platform for AI?" The answer is probably negative.
Way forward
In marketing, AI will improve mobile search with relevant results, helping marketers achieve better conversions and justifying higher ad rates. With the help of AI, search engines are likely to deliver better search results custom-made for a specific point in time, say a micro-moment when users might be vulnerable and buy if a certain set of results is delivered by a search. One may also expect to see better language understanding and an AI boom in China, among other things [14]. The underlying idea is to enhance productivity and maintain quality, thus saving billions of dollars.
AI will be very useful in processing large volumes of text, say from e-mails or voice-recorded data, thanks to progress in NLP.
AI's scientific challenge [28] comprises providing new heuristic-based computational models that accommodate the wide range of capabilities attributed to the human brain and considered useful even for nonhuman intelligence. Common logical keystones help in understanding theories of knowledge representation, planning, problem solving, reasoning and some aspects of NLP, whereas economics and the mathematics of Markov decision processes help unify probabilistic forecasting, fault diagnosis and repair, reinforcement learning, robot control and many aspects of speech recognition and image processing. Many of these cross disciplinary boundaries and lead to integration with other fields. AI has long included logic, philosophy, psychology and linguistics in its domain; the inclusion of economics, decision theory, control theory and operations research has served as a focus for more recent efforts. The next area is the integration of Big Data results with AI, to see and analyze how these systems can derive new heuristics from large amounts of data processed using Big Data methods instead of heavy, time-consuming mathematical computations. AI may even be useful in handling Big Data [32].
Experts believe that AI, Big Data [27, 36], Cloud Computing [33], InfoGraphic [34] and the Internet of Things [27, 35] will continue to trend in 2017, building on Big Data, as data is an essential building block of AI. One may also see the integration of data from a variety of apps in many fields, including machine translation and image recognition.
The IBM supercomputer Watson [41] has exploited the power of science, technology and culture to develop many AI-based systems. Thirty-four insurance employees in Japan were made redundant to make way for an AI-based system; this trend is likely to continue and may spread rapidly in certain industries over the coming years. As IBM's Watson supercomputer gets smarter day by day with more and more experience, a time may soon come when robots no longer think like humans, but humans need to learn to think like robots.
Besides the threat to several types of mundane and routine jobs, Watson has made some incredible breakthroughs in areas such as reducing the threat of cybercrime, automated diagnostics (the automated doctor) and the automated lawyer: Ross Intelligence and IBM Watson joined hands to tackle legal matters [42]. Other applications of AI with high replacement potential lie in areas such as teaching and auto-grading student papers; the publishing industry, where high-quality articles can be written automatically in a short amount of time; automated chefs, creating new and tasty recipes and replacing expensive chefs in restaurants; and automated policemen or soldiers, where algorithms decide whom to kill or arrest, which may take time even though drones are already replacing soldiers in a number of wars [45]. All of this can be achieved using the latest advances in many areas of science, technology and culture. It also establishes that AI and its capabilities will improve day by day and thus become very useful to humans.
Lawyers can do research with the ROSS legal app, an automated-lawyer system, by asking questions in natural language, just as they would consult a colleague. Since the app is built on a cognitive computing system, it is able to scan over a billion text documents in less than a second and return the exact passage the user needs. Thus, there is no need for endless Internet browsing and database searches to get to the desired results.
A future challenge is that AI-based systems will be deployed in many new areas, which may also cause unemployment. They may create jobs with new specifications, although the number of new jobs created may be small. So while many of these research areas in and around AI, Data Science, ML, MI and Deep Learning show a lot of potential, execution and fine-tuning will add major risks if systems are deployed hastily and without suitable preparation.
Can AI write AI algorithms or, in short, can AI automate AI [45]? The answer will be a big yes in time to come.
Can one instill emotions in a machine, automation or robot [37]? How would you feel about getting therapy from a robot [43]? Emotionally intelligent machines may not be as far away as they seem. Over the last few decades, AI has improved a lot at understanding emotional responses in humans, and it is possible that AI may one day match humans in recognizing different types of emotions. But could an AI-based machine ever experience emotions? Further, could one give a command to the masses through a computer-brain interface? If the answer to either of these two questions is affirmative, then the problem of a takeover by machines may not remain imaginary.
References
[1] Stephen Hawking, Elon Musk, and Bill Gates Warn About Artificial Intelligence
[2] What is Artificial Intelligence? Stop worrying about AI destroying humans, expert says
[3] Elon Musk just confirmed that he's making tech to merge human brains and computers
[4] Here's what Elon Musk thinks about Brain Implants, AI, And Basic Income
[5] DeepMind founder Demis Hassabis on how AI will shape the future
[6] Fanuc
[7] Weak AI, https://en.wikipedia.org/wiki/Weak_AI
[8] Benefits & Risks of AI
[9] Chinese Room # strong AI
[10] Searle, John (2009), "Chinese room argument", Scholarpedia, 4 (8): 3100, doi:10.4249/scholarpedia.3100
[11] Searle, John (1 November 2004), Mind: A Brief Introduction, Oxford University Press, Inc., ISBN 978-0-19-515733-8
[12] Microsoft's Bill Gates insists AI is a threat
[13] Go beyond artificial intelligence with Watson
[14] MIT Technology Review: 5 Big Predictions for Artificial Intelligence in 2017
[15] Generative Adversarial Networks – Hot Topic in Machine Learning
[16] Explore the AlphaGo Games
[17] Neural Networks
[18] Machine Learning
[19] The Promise and Potential Peril of AI
[20] The Promise and Perils of AI in Compliance
[21] Will Robots Take Over? Artificial Intelligence To Affect UK Workers Soon
[22] Natural language processing (NLP)
[23] Are We Overthinking the Dangers of Artificial Intelligence?
[24] The Pentagon's 'Terminator Conundrum': Robots That Could Kill on Their Own
[25] Reinforcement Learning, http://reinforcementlearning.ai-depot.com/
[26] R.G.S. Asthana, "Evolutionary Algorithms and Neural Networks" (invited Chapter 6, pages 111-136), in Soft Computing and Intelligent Systems (Theory and Applications), Academic Press Series in Engineering, edited by Naresh K. Sinha, Madan M. Gupta and Lotfi A. Zadeh, ISBN 978-0-12-646490-0
[27] Future 2030 by Dr. RGS Asthana, Senior Member IEEE
[28] Strategic Directions in Artificial Intelligence
[29] Generative Adversarial Networks (GANs) in 50 lines of code (PyTorch)
[30] PyTorch
[31] Yann LeCun
[32] Big Data Supporting Deep Learning, AI and More in 2017
[34] InfoGraphic
[35] Internet of Things
[36] Big Data
[37] Artificial Intelligence, Cognitive Systems and the Learning Brain
[38] Google releases TensorFlow 1.0 with new machine learning tools
[39] Deep Learning
[40] A Comparative Roundup: Artificial Intelligence vs. Machine Learning vs. Deep Learning
[41] Wired: IBM Watson AI
[42] ROSS and Watson tackle the law
[43] Will AI ever understand human emotions?
[44] What is the difference between a neural network and a deep neural network?
[45] Data Science Central: Which jobs will AI (Artificial Intelligence) kill?
[46] AI Will Eclipse Hadoop, Says Forrester, So Cloudera Files For IPO As A Machine Learning Platform
[47] Hadoop
[48] Demystifying AI: Here's everything you need to know about AI