{"id":3417,"date":"2019-04-25T12:24:30","date_gmt":"2019-04-25T03:24:30","guid":{"rendered":"http:\/\/163.180.4.222\/lab\/?p=3417"},"modified":"2019-04-25T12:24:30","modified_gmt":"2019-04-25T03:24:30","slug":"brain-signals-translated-into-speech-using-artificial-intelligence","status":"publish","type":"post","link":"https:\/\/biochemistry.khu.ac.kr\/lab\/?p=3417","title":{"rendered":"Brain signals translated into speech using artificial intelligence"},"content":{"rendered":"<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<h5>Technology could one day be used to help people who can\u2019t talk to communicate.<\/h5>\n<p>&nbsp;<\/p>\n<div class=\"article__body serif cleared\">\n<figure class=\"figure\">\n<div class=\"embed intensity--high\">\n<div class=\"embed intensity--high\"><img decoding=\"async\" class=\"figure__image\" src=\"https:\/\/media.nature.com\/w800\/magazine-assets\/d41586-019-01328-x\/d41586-019-01328-x_16672812.jpg\" alt=\"Masahiro Fujita in a hospital bed, with a breathing tube. In front of him is a monitor plastered with Japanese signs.\" data-src=\"\/\/media.nature.com\/w800\/magazine-assets\/d41586-019-01328-x\/d41586-019-01328-x_16672812.jpg\" \/><\/div>\n<\/div><figcaption>\n<p class=\"figure__caption sans-serif\"><span class=\"mr10\">People with paralysing conditions such as motor-neuron disease often rely on technology to help them speak.<\/span>Credit: BJ Warnick\/Alamy<\/p>\n<\/figcaption><\/figure>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>In an effort to provide a voice for people who can\u2019t speak, neuroscientists have designed a device that can transform brain signals into speech.<\/p>\n<p>This technology isn\u2019t yet accurate enough for use outside the lab, although it can synthesize whole sentences that are mostly intelligible. 
Its creators described their speech-decoding device in a study<sup><a href=\"https:\/\/www.nature.com\/articles\/d41586-019-01328-x?utm_source=feedburner&amp;utm_medium=feed&amp;utm_campaign=Feed%3A+nature%2Frss%2Fcurrent+%28Nature+-+Issue%29#ref-CR1\">1<\/a><\/sup> published on 24 April in\u00a0<i>Nature<\/i>.<\/p>\n<p>Scientists have previously used artificial intelligence to translate single words<sup><a href=\"https:\/\/www.nature.com\/articles\/d41586-019-01328-x?utm_source=feedburner&amp;utm_medium=feed&amp;utm_campaign=Feed%3A+nature%2Frss%2Fcurrent+%28Nature+-+Issue%29#ref-CR2\">2<\/a><\/sup><sup>,<\/sup><sup><a href=\"https:\/\/www.nature.com\/articles\/d41586-019-01328-x?utm_source=feedburner&amp;utm_medium=feed&amp;utm_campaign=Feed%3A+nature%2Frss%2Fcurrent+%28Nature+-+Issue%29#ref-CR3\">3<\/a><\/sup>, mostly consisting of one syllable, from brain activity, says Chethan Pandarinath, a neuroengineer at Emory University in Atlanta, Georgia, who co-wrote a\u00a0<a href=\"https:\/\/preview-www.nature.com\/platform\/rh\/preview\/page\/nature\/brain-implants-that-let-you-speak-your-mind\/16645644?view=fragmentPreview\" data-track=\"click\" data-label=\"https:\/\/preview-www.nature.com\/platform\/rh\/preview\/page\/nature\/brain-implants-that-let-you-speak-your-mind\/16645644?view=fragmentPreview\" data-track-category=\"body text link\">commentary<\/a>\u00a0accompanying the study. \u201cMaking the leap from single syllables to sentences is technically quite challenging and is one of the things that makes the current work so impressive,\u201d he says.<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Mapping movements<\/strong><\/p>\n<p>Many people who have lost the ability to speak communicate using technology that requires them to make tiny movements to control a cursor that selects letters or words on a screen. UK physicist Stephen Hawking, who had motor-neuron disease, was one famous example. 
He used a speech-generating device activated by a muscle in his cheek, says study leader Edward Chang, a neurosurgeon at the University of California, San Francisco.<\/p>\n<p>Because people who use such devices must type out words letter by letter, these devices can be very slow, producing up to ten words per minute, Chang says. Natural speech averages 150 words per minute. \u201cIt\u2019s the efficiency of the vocal tract that allows us to do that,\u201d he says. And so Chang and his team decided to model the vocal system when constructing their decoder.<\/p>\n<p>&nbsp;<\/p>\n<figure class=\"figure\">\n<div class=\"embed intensity--high\">\n<div class=\"embed intensity--high\"><img decoding=\"async\" class=\"figure__image\" src=\"https:\/\/media.nature.com\/w800\/magazine-assets\/d41586-019-01328-x\/d41586-019-01328-x_16672810.jpg\" alt=\"A hand holding an array of intracranial electrodes: a palm-sized sheet of plastic studded with metal, attached to wires.\" data-src=\"\/\/media.nature.com\/w800\/magazine-assets\/d41586-019-01328-x\/d41586-019-01328-x_16672810.jpg\" \/><\/div>\n<\/div><figcaption>\n<p class=\"figure__caption sans-serif\"><span class=\"mr10\">Researchers implanted electrodes similar to these in participants\u2019 skulls to record their brain signals.<\/span>Credit: UCSF<\/p>\n<\/figcaption><\/figure>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>The researchers worked with five people who had electrodes implanted on the surface of their brains as part of epilepsy treatment. First, the team recorded brain activity as the participants read hundreds of sentences aloud. Then, Chang and his colleagues combined these recordings with data from previous experiments that determined how movements of the tongue, lips, jaw and larynx created sound.<\/p>\n<p>The team trained a deep-learning algorithm on these data, and then incorporated the program into their decoder. 
The device transforms brain signals into estimated movements of the vocal tract, and turns these movements into synthetic speech. People who listened to 101 synthesized sentences could understand 70% of the words on average, Chang says.<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<div class=\"c-podcast sans-serif\">\n<div class=\"embed intensity--high\">\n<div class=\"c-podcast__container\">\n<p class=\"c-podcast__audio-caption\">Two examples of a participant reading a sentence, followed by the synthesized version of the sentence generated from their brain activity.<\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<div class=\"c-podcast__download-transcript text14\"><a class=\"c-podcast__download\" href=\"https:\/\/www.nature.com\/magazine-assets\/d41586-019-01328-x\/d41586-019-01328-x_16672814.wav\" download=\"nature-24 April 2019-MO0\" data-track=\"click\" data-track-category=\"magazine podcast\" data-track-action=\"download\" data-track-label=\"MP3 download\">Download audio<\/a>\n<div class=\"c-podcast__transcript\"><\/div>\n<div class=\"c-podcast__copyright text-gray-light\">Credit: Chang lab, UCSF Dept. of Neurosurgery<\/div>\n<\/div>\n<\/div>\n<\/div>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>In another experiment, the researchers asked one participant to read sentences aloud and then to mime the same sentences by moving their mouth without producing sound. 
The sentences synthesized in this test were of lower quality than those created from audible speech, Chang says, but the results are still encouraging.<\/p>\n<p><strong>Intelligible future<\/strong><\/p>\n<p>Speech created by mapping brain activity to movements of the vocal tract and translating them to sound is more easily understood than that produced by mapping brain activity directly to sound, says Stephanie Ri\u00e8s, a neuroscientist at San Diego State University in California.<\/p>\n<p>But it\u2019s unclear whether the new speech decoder would work with words that people only think, says Amy Orsborn, a neural engineer at the University of Washington in Seattle. \u201cThe paper does a really good job of showing that this works for mimed speech,\u201d she says. \u201cBut how would this work when someone\u2019s not moving their mouth?\u201d<\/p>\n<p>Marc Slutzky, a neurologist at Northwestern University in Chicago, Illinois, agrees and says that the decoder\u2019s performance leaves room for improvement. He notes that listeners identified the synthesized speech by selecting words from a set of choices; as the number of choices increased, people had more trouble understanding the words.<\/p>\n<p>The study \u201cis a really important step, but there\u2019s still a long way to go before synthesized speech is easily intelligible\u201d, Slutzky says.<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<div class=\"emphasis\">doi: 10.1038\/d41586-019-01328-x<\/div>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>(Original article: click <a href=\"https:\/\/www.nature.com\/articles\/d41586-019-01328-x?utm_source=feedburner&amp;utm_medium=feed&amp;utm_campaign=Feed%3A+nature%2Frss%2Fcurrent+%28Nature+-+Issue%29\">here<\/a>~)<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>&nbsp; &nbsp; Technology could one day be used to help people who can\u2019t talk to communicate. 
&nbsp; People with paralysing conditions such as motor-neuron disease<a href=\"https:\/\/biochemistry.khu.ac.kr\/lab\/?p=3417\" class=\"more-link\">(more&#8230;)<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[33,35,29,30],"tags":[],"class_list":["post-3417","post","type-post","status-publish","format-standard","hentry","category-do-biology","category-lets-do-computer-science","category-lets-do-science","category-recent-science-news"],"aioseo_notices":[],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack-related-posts":[{"id":3419,"url":"https:\/\/biochemistry.khu.ac.kr\/lab\/?p=3419","url_meta":{"origin":3417,"position":0},"title":"Brain implants that let you speak your mind &#038; \uc774 \uc7a5\uce58 \uc77c\ucc0d \ub098\uc654\ub2e4\uba74\u2026 \ud638\ud0b9 \ubc15\uc0ac\uc758 \uc601\uad6d\uc2dd \uc5b5\uc591\ub3c4 \ub4e4\uc5c8\uc744 \ud150\ub370","author":"biochemistry","date":"April 25, 2019","format":false,"excerpt":"\u00a0 \u00a0 A brain\u2013computer interface device synthesizes speech using the neural signals that control lip, tongue, larynx and jaw movements, and could be a stepping stone to restoring speech function in individuals unable to 
speak. \u00a0 Speaking might seem an effortless activity, but it is one of the most complex\u2026","rel":"","context":"In &quot;Let's Do Biology!&quot;","block_context":{"text":"Let's Do Biology!","link":"https:\/\/biochemistry.khu.ac.kr\/lab\/?cat=33"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":3937,"url":"https:\/\/biochemistry.khu.ac.kr\/lab\/?p=3937","url_meta":{"origin":3417,"position":1},"title":"The ethics of brain\u2013computer interfaces","author":"biochemistry","date":"July 27, 2019","format":false,"excerpt":"\u00a0 \u00a0 As technologies that integrate the brain with computers become more complex, so too do the ethical issues that surround their use. \u00a0 \u00a0 A helmet containing a brain\u2013computer interface that enables the wearer to select symbols on a screen using brain activity.Credit: Jean-Pierre Clatot\/AFP\/Getty \u00a0 \u00a0 \u201cIt becomes\u2026","rel":"","context":"In &quot;Let's Do Biology!&quot;","block_context":{"text":"Let's Do Biology!","link":"https:\/\/biochemistry.khu.ac.kr\/lab\/?cat=33"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":2668,"url":"https:\/\/biochemistry.khu.ac.kr\/lab\/?p=2668","url_meta":{"origin":3417,"position":2},"title":"Using neuroscience to develop artificial intelligence","author":"biochemistry","date":"February 15, 2019","format":false,"excerpt":"\u00a0 \u00a0 When the mathematician Alan Turing posed the question \u201cCan machines think?\u201d in the first line of his seminal 1950 paper that ushered in the quest for artificial intelligence (AI) (1), the only known systems carrying out complex computations were biological nervous systems. 
It is not surprising, therefore, that\u2026","rel":"","context":"In &quot;Essays on Science&quot;","block_context":{"text":"Essays on Science","link":"https:\/\/biochemistry.khu.ac.kr\/lab\/?cat=32"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":4788,"url":"https:\/\/biochemistry.khu.ac.kr\/lab\/?p=4788","url_meta":{"origin":3417,"position":3},"title":"Next-generation artificial vision comes into view","author":"biochemistry","date":"November 8, 2019","format":false,"excerpt":"\u00a0 \u00a0 A grid of photodiodes as wide as a sesame seed rests in the eye of a person with macular degeneration. PHOTO: PIXIUM VISION SA\/PARIS \u00a0 \u00a0 In 2014, U.S. regulators approved a futuristic treatment for blindness. The device, called Argus II, sends signals from a glasses-mounted camera to\u2026","rel":"","context":"In &quot;'06. \uc5d0\ub108\uc9c0\uc640 \uc5d4\ud2b8\ub85c\ud53c'\uc640 '07. \uacfc\ud559\uacfc \ubb38\uba85' \uad00\ub828&quot;","block_context":{"text":"'06. \uc5d0\ub108\uc9c0\uc640 \uc5d4\ud2b8\ub85c\ud53c'\uc640 '07. \uacfc\ud559\uacfc \ubb38\uba85' \uad00\ub828","link":"https:\/\/biochemistry.khu.ac.kr\/lab\/?cat=42"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":1438,"url":"https:\/\/biochemistry.khu.ac.kr\/lab\/?p=1438","url_meta":{"origin":3417,"position":4},"title":"Side effects of addiction treatment","author":"biochemistry","date":"August 24, 2018","format":false,"excerpt":"\u00a0 \u00a0 (\uc6d0\ubb38: \uc5ec\uae30\ub97c \ud074\ub9ad\ud558\uc138\uc694~) \u00a0 \u00a0 Science\u00a0\u00a024 Aug 2018: Vol. 361, Issue 6404, pp. 761 DOI: 10.1126\/science.aau6548 \u00a0 \u00a0 \u00a0 Antiaddiction drugs could help curtail the opioid epidemic, but they may pose risks of their own. 
\u00a0 \u00a0 \u00a0 Drug addiction is a major global health issue, and the\u2026","rel":"","context":"In &quot;Let's Do Biology!&quot;","block_context":{"text":"Let's Do Biology!","link":"https:\/\/biochemistry.khu.ac.kr\/lab\/?cat=33"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":2584,"url":"https:\/\/biochemistry.khu.ac.kr\/lab\/?p=2584","url_meta":{"origin":3417,"position":5},"title":"Pioneering brain study reveals \u2018software\u2019 differences between humans and monkeys","author":"biochemistry","date":"January 29, 2019","format":false,"excerpt":"\u00a0 \u00a0 Neuroscientists tracked the activity of single neurons deep in the brain and suggest the findings could explain humans\u2019 intelligence \u2014 and susceptibility to psychiatric disorders. \u00a0 People with epilepsy undergoing certain treatments often also participate in neuroscience studies.Credit: BSIP\/UIG via Getty \u00a0 \u00a0 Neuroscientists have for the first\u2026","rel":"","context":"In &quot;Let's Do Biology!&quot;","block_context":{"text":"Let's Do 
Biology!","link":"https:\/\/biochemistry.khu.ac.kr\/lab\/?cat=33"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]}],"jetpack_sharing_enabled":false,"jetpack_shortlink":"https:\/\/wp.me\/p9Xo1j-T7","_links":{"self":[{"href":"https:\/\/biochemistry.khu.ac.kr\/lab\/index.php?rest_route=\/wp\/v2\/posts\/3417","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/biochemistry.khu.ac.kr\/lab\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/biochemistry.khu.ac.kr\/lab\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/biochemistry.khu.ac.kr\/lab\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/biochemistry.khu.ac.kr\/lab\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=3417"}],"version-history":[{"count":1,"href":"https:\/\/biochemistry.khu.ac.kr\/lab\/index.php?rest_route=\/wp\/v2\/posts\/3417\/revisions"}],"predecessor-version":[{"id":3418,"href":"https:\/\/biochemistry.khu.ac.kr\/lab\/index.php?rest_route=\/wp\/v2\/posts\/3417\/revisions\/3418"}],"wp:attachment":[{"href":"https:\/\/biochemistry.khu.ac.kr\/lab\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=3417"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/biochemistry.khu.ac.kr\/lab\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=3417"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/biochemistry.khu.ac.kr\/lab\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=3417"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}