{"id":861,"date":"2017-09-29T12:09:17","date_gmt":"2017-09-29T09:09:17","guid":{"rendered":"http:\/\/esanu.name\/vitalie\/?p=861"},"modified":"2017-09-29T12:09:17","modified_gmt":"2017-09-29T09:09:17","slug":"ai-diacritice","status":"publish","type":"post","link":"http:\/\/esanu.name\/vitalie\/?p=861","title":{"rendered":"AI Diacritice"},"content":{"rendered":"<blockquote><p>Orice decizie care face o persoan\u0103 normal\u0103 \u00een &#8211; Andrew Ng<\/p><\/blockquote>\n<p>Recent experimentam cu <a title=\"Inteligen\u021b\u0103 artificial\u0103\" href=\"https:\/\/ro.wikipedia.org\/wiki\/Inteligen\u021b\u0103_artificial\u0103\" target=\"_blank\" rel=\"noopener\">Inteligen\u021ba Artificial\u0103<\/a>, mai exact cu <a title=\"Machine Learning\" href=\"https:\/\/en.wikipedia.org\/wiki\/Machine_learning\" target=\"_blank\" rel=\"noopener\">Machine Learning<\/a>, mai exact cu <a title=\"Deep Learning\" href=\"https:\/\/en.wikipedia.org\/wiki\/Deep_learning\" target=\"_blank\" rel=\"noopener\">Deep Learning<\/a>\u2026 pentru geeks\u2026 m\u0103 jucam cu <a title=\"Long Short Term Memory\" href=\"https:\/\/en.wikipedia.org\/wiki\/Long_short-term_memory\" target=\"_blank\" rel=\"noopener\">LSTM<\/a>.<\/p>\n<p>Mi-a venit un g\u00e2nd s\u0103 \u00eencerc s\u0103 corectez cu ajutorul LSTM <a title=\"Diacritici Limba Rom\u00e2n\u0103\" href=\"https:\/\/ro.wikipedia.org\/wiki\/Wikipedia:Diacritice\" target=\"_blank\" rel=\"noopener\">diacriticele \u00een limba rom\u00e2n\u0103<\/a>.<\/p>\n<p>Am cump\u0103rat cel mai puternic <a title=\"Graphics Processing Unit\" href=\"https:\/\/en.wikipedia.org\/wiki\/Graphics_processing_unit\" target=\"_blank\" rel=\"noopener\">GPU<\/a> g\u0103sit prin \u021bar\u0103, un <a title=\"NVIDIA Geforce gtx 1080 TI\" href=\"https:\/\/www.zap.md\/laptop-pc\/placi-video\/zotac-geforce-gtx-1080-ti-amp-edition\/lp14-1001\" target=\"_blank\" rel=\"noopener\">NVIDIA Geforce GTX 1080 TI<\/a>. F\u0103r\u0103 GPU \u00een Deep Learning nu faci nimic. 
You buy as much as your wallet can take, otherwise you are wasting your time. I built an AI model and collected Romanian-language texts from the Internet. I let the GPU \u201cchew\u201d through this text, and I put the result online on a server.<\/p>\n<p>I should mention that the AI model has no notion of the Romanian language, of Romanian words, or of rules to follow. You simply feed it as much text as possible and let it figure things out on its own. The server worked hard for a few days until I got the astonishing accuracy of 99.97%.<\/p>\n<p><strong>Boom!<\/strong> The AI learned to place diacritics depending on context. Try it: &#8220;Langa casa mea nu creste iarba. Langa casa creste un copac.&#8221; \ud83d\ude09<\/p>\n<p>See what I managed to do at\u00a0<a title=\"AI Diacritice\" href=\"https:\/\/diacritice.ai\" target=\"_blank\" rel=\"noopener\">https:\/\/diacritice.ai<\/a>, or install the <a href=\"https:\/\/chrome.google.com\/webstore\/detail\/diacriticeai\/gedahnddbefjjlpbhghpjnddbkleambb\" target=\"_blank\" rel=\"noopener\">Chrome extension<\/a>.<\/p>\n<p><em>For the curious, details follow.<\/em><\/p>\n<p>Machine Learning opens new perspectives on problems that until now could not be solved algorithmically. Diacritics are one example.<\/p>\n<p>It is not easy to build an algorithm that corrects diacritics, because some of them follow from context. Example: \u201cL\u00e2ng\u0103 casa mea cre\u0219te un copac. L\u00e2ng\u0103 cas\u0103 nu este nimeni\u201d. For the same word, casa versus cas\u0103, the diacritic is written or not depending on context.<\/p>\n<p>To solve this kind of problem you need a lot of data. 
I collected no less than 7.3 GB of Romanian texts written with diacritics.<\/p>\n<p>Then you need computing power. In my case I bought the NVIDIA Geforce GTX 1080 TI, one of the most powerful graphics cards to be found in Moldova. The neural network is trained on the graphics card, whose 3584 CUDA cores work in parallel. It is precisely this transition from the CPU to graphics cards that gave this field its big push forward.<\/p>\n<p>We also need a framework to build the neural network. I chose <a title=\"Keras\" href=\"https:\/\/keras.io\" target=\"_blank\" rel=\"noopener\">Keras<\/a> for its simplicity and <a title=\"Tensorflow\" href=\"https:\/\/www.tensorflow.org\" target=\"_blank\" rel=\"noopener\">Tensorflow<\/a> for execution.<\/p>\n<p>The idea is the following: we search the text for the letters <strong>a, i, s, t<\/strong>, take the 30 characters to the right and the 30 to the left, and feed them into the neural network, telling it that at the output we want one of <strong>a, i, s, t, \u0103, \u00e2, \u00ee, \u0219, \u021b<\/strong>. Why 30 on each side? I also tried 15 and 20 characters per side, but with 30 I reached a good result. Other lengths may well work better; trying them remains for the future.<\/p>\n<p>Deep learning ultimately boils down to multiplications of matrices (aka tensors): <code>X*W + b = Y<\/code>. 
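As a minimal sketch (the function name and the space-padding at the text edges are my assumptions, not from the post), the 61-character windows just described can be extracted like this:

```python
# For every occurrence of a, i, s, t, take the 30 characters to the left
# and the 30 to the right, giving a 61-character window centered on the
# target letter. Padding with spaces at the edges is my own choice.
TARGETS = set("aist")
WINDOW = 30  # characters on each side

def extract_windows(text):
    low = text.lower()
    padded = " " * WINDOW + low + " " * WINDOW
    # padded[i + WINDOW] corresponds to low[i], so the slice below is
    # always exactly 2*WINDOW + 1 = 61 characters long.
    return [padded[i : i + 2 * WINDOW + 1]
            for i, ch in enumerate(low) if ch in TARGETS]
```

Each returned window has the target letter at its middle index, 30.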
Here X is the input matrix (our text encoded in a certain form) and Y is the target the network learns from; in our case Y is a 3-element vector in which we indicate whether the symbol carries a diacritic or not.<\/p>\n<p>How do we turn text into a matrix? We assign each letter of the Romanian alphabet a position in a space of our own making. For simplicity the whole text is converted to lowercase.<\/p>\n<p><code>alphabet = ['a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r',<br \/>\n's','t','u','v','w','x','y','z',' ','.',',','!','?','-']<\/code><\/p>\n<p>We build a 32&#215;61 matrix, that is, the length of the alphabet (32 characters) by the window we analyze (61 characters).<\/p>\n<p>Let us take, for example, the text:<br \/>\n<strong>langa casa creste un copac<\/strong><br \/>\nThe first letter of the string, &#8216;l&#8217;, is at position 11 in the alphabet, so the first row of the matrix will be all 0 except for a 1 written at position 11.<br \/>\n<code>[0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]<\/code><br \/>\nFor the second letter of the string, &#8216;a&#8217;, the second row will have a 1 at position 0 and 0 everywhere else&#8230;<br \/>\n<code>[1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]<\/code><br \/>\nand so on for another 59 rows.<\/p>\n<p>This is how we obtain the matrix X that is served as input. The output vector Y has 3 elements and can be <code>[1, 0, 0]<\/code>\u00a0or <code>[0, 1, 0]<\/code> or <code>[0, 0, 1]<\/code>. The explanation of why Y is represented this way follows below.<\/p>\n<p>The matrix W and the vector b are initialized with some arbitrary values. 
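The one-hot encoding just described can be sketched as follows (a minimal sketch; the helper name and the fallback mapping unknown characters to a space are my assumptions, with one row per character of the window, as in the worked example above):

```python
import numpy as np

# One-hot encode a text window: each character becomes a row of length 32,
# with a single 1 at the character's position in the alphabet.
alphabet = ['a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r',
            's','t','u','v','w','x','y','z',' ','.',',','!','?','-']
char_index = {c: i for i, c in enumerate(alphabet)}

def one_hot(window):
    m = np.zeros((len(window), len(alphabet)), dtype=np.float32)
    for row, ch in enumerate(window):
        # Characters outside the alphabet fall back to space (my assumption).
        m[row, char_index.get(ch, char_index[' '])] = 1.0
    return m

X = one_hot("langa casa creste un copac")
# first row: 1 only at position 11 ('l'); second row: 1 at position 0 ('a')
```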
Then, during training, Deep Learning tries to adjust the values of W and b so that, for as many inputs as possible, X*W+b\u00a0matches as exactly as possible the result we indicated in Y.<\/p>\n<p>One problem I noticed here is that if you use an LSTM in only one direction, short texts and the beginnings of texts cause difficulties with correct detection. So I switched to a bidirectional LSTM. Bidirectional models are also used in speech recognition, translation and handwriting recognition. That is, I show the neural network the 61 characters from right to left and then from left to right.<\/p>\n<p>Schematically it looks like this:<\/p>\n<p><a href=\"http:\/\/esanu.name\/vitalie\/wp-content\/uploads\/2017\/09\/Screen-Shot-2017-09-28-at-12.42.05.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-large wp-image-867\" alt=\"Bidirectional LSTM\" src=\"http:\/\/esanu.name\/vitalie\/wp-content\/uploads\/2017\/09\/Screen-Shot-2017-09-28-at-12.42.05-1024x224.png\" width=\"625\" height=\"136\" \/><\/a><\/p>\n<p>Expressed in the Keras framework, the code looks like this:<\/p>\n<figure id=\"attachment_862\" aria-describedby=\"caption-attachment-862\" style=\"width: 615px\" class=\"wp-caption alignnone\"><a href=\"http:\/\/esanu.name\/vitalie\/wp-content\/uploads\/2017\/09\/Screen-Shot-2017-09-26-at-19.27.54.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-large wp-image-862\" alt=\"The LSTM model for correcting diacritics in Romanian.\" src=\"http:\/\/esanu.name\/vitalie\/wp-content\/uploads\/2017\/09\/Screen-Shot-2017-09-26-at-19.27.54-1024x161.png\" width=\"625\" height=\"98\" \/><\/a><figcaption id=\"caption-attachment-862\" class=\"wp-caption-text\">The LSTM model for correcting diacritics in Romanian.<\/figcaption><\/figure>\n<p>To run through as many iterations as possible we test on a smaller text: I took all the transcripts from <a title=\"Privesc.Eu\" href=\"https:\/\/www.privesc.eu\" target=\"_blank\" rel=\"noopener\">privesc.eu<\/a> (approximately 62 MB) and after a weekend reached 98.98% accuracy on the training set, 98.97% on the validation set and 97.71% on the <a title=\"Test set\" href=\"https:\/\/en.wikipedia.org\/wiki\/Test_set\" target=\"_blank\" rel=\"noopener\">test set<\/a> (1 MB of texts from the archive of the magazine <a title=\"Revista Contrafort\" href=\"http:\/\/www.contrafort.md\" target=\"_blank\" rel=\"noopener\">contrafort.md<\/a>).<\/p>\n<p>The first thing I tried was to have the network choose, at its output, among the 9 characters <strong>a, i, s, t, \u0103, \u00e2, \u00ee, \u0219, \u021b<\/strong>. In theory, though, it would be better for it to answer 0 or 1, i.e. whether or not a diacritic should go at position 31. Since we know that if position 31 holds an i and the answer is 1, we write \u00ee. The problem is that some people write with \u00e2-from-a, so for a there are 3 options: <strong>a, \u0103, \u00e2<\/strong>. To minimize the answer space I chose a vector with 3 categories: <code>[1, 0, 0]<\/code> &#8211; no diacritic, <code>[0, 1, 0]<\/code> &#8211; a diacritic, <code>[0, 0, 1]<\/code> &#8211; it is \u00e2.<\/p>\n<p>If you noticed, nowhere do I tell the system that the diacritic character we are looking for sits, in fact, exactly at position 31. 
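Since the Keras code above appears only as a screenshot, here is a hypothetical reconstruction of a model of this shape, together with the mapping from the three output categories back to characters; the hidden-layer size and the per-letter table are my own guesses, not the post's exact code:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Bidirectional, Dense, LSTM

# Hypothetical reconstruction: a bidirectional LSTM over the 61x32
# one-hot window, ending in the 3-way softmax described in the post
# (no diacritic / diacritic / a-circumflex). The hidden size of 64
# units is my assumption; the post does not state it.
model = Sequential([
    Bidirectional(LSTM(64), input_shape=(61, 32)),
    Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# Mapping the three categories back to characters; the per-letter table
# is my reconstruction of the scheme described above.
DIACRITIC = {"a": "\u0103", "i": "\u00ee", "s": "\u0219", "t": "\u021b"}

def apply_class(letter, cls):
    if cls == 1:       # [0, 1, 0] -> the plain diacritic for this letter
        return DIACRITIC[letter]
    if cls == 2:       # [0, 0, 1] -> \u00e2 (only meaningful for 'a')
        return "\u00e2"
    return letter      # [1, 0, 0] -> leave the letter as it is
```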
Yet after a few million iterations the network figures out on its own where the target sits :D.<\/p>\n<p>After training with <a title=\"Keras\" href=\"https:\/\/keras.io\" target=\"_blank\" rel=\"noopener\">Keras<\/a> and <a title=\"Tensorflow\" href=\"https:\/\/www.tensorflow.org\" target=\"_blank\" rel=\"noopener\">Tensorflow<\/a>, we track the result with <a title=\"Tensorboard\" href=\"https:\/\/www.tensorflow.org\/get_started\/summaries_and_tensorboard\" target=\"_blank\" rel=\"noopener\">Tensorboard<\/a>. If there is no <a title=\"Overfit\" href=\"https:\/\/en.wikipedia.org\/wiki\/Overfitting\" target=\"_blank\" rel=\"noopener\">overfit<\/a> or <a title=\"Underfit\" href=\"https:\/\/en.wikipedia.org\/wiki\/Overfitting#Underfitting\" target=\"_blank\" rel=\"noopener\">underfit<\/a>, great: we have found the right model. I exported the model to be served with <a title=\"Tensorflow serving\" href=\"https:\/\/www.tensorflow.org\/serving\/\" target=\"_blank\" rel=\"noopener\">Tensorflow Serving<\/a>. I rented a $5\/month server on <a title=\"DigitalOcean Cloud Hosting\" href=\"https:\/\/www.digitalocean.com\" target=\"_blank\" rel=\"noopener\">Digitalocean<\/a> and installed Tensorflow Serving on it. To keep the web server side simple, I used <a title=\"Flask\" href=\"http:\/\/flask.pocoo.org\" target=\"_blank\" rel=\"noopener\">Flask<\/a>.<\/p>\n<p>Job done.<\/p>\n<p>I invite you to share your ideas, suggestions, praise or criticism on the site <a title=\"AI Diacritice \u00een limba rom\u00e2n\u0103\" href=\"http:\/\/diacritice.ai\" target=\"_blank\" rel=\"noopener\">diacritice.ai<\/a>.<\/p>\n<p>PS: What comes next?<br \/>\nThe current version, 1.1, is trained on 1.4 GB of text. One epoch (an iteration over the whole text) takes approximately 15 hours. 
Next I will start training on the full 7.3 GB, though it will take a few weeks to get through a few epochs. The 99.97% quality the system has now is quite acceptable. I am not stopping here; I hope to reach 99.99%.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Any decision a normal person makes in\u2026 &#8211; Andrew Ng Recently I was experimenting with Artificial Intelligence, more precisely with Machine Learning, more precisely with Deep Learning\u2026 for the geeks\u2026 I was playing with an LSTM. A thought came to me: try to correct Romanian diacritics with the help of an LSTM. I bought the most powerful GPU found in &#8230; <a title=\"AI Diacritice\" class=\"read-more\" href=\"http:\/\/esanu.name\/vitalie\/?p=861\" aria-label=\"More on AI Diacritice\">Read more<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3,12,28],"tags":[],"class_list":["post-861","post","type-post","status-publish","format-standard","hentry","category-ai","category-deep-learning","category-machine-learning"],"_links":{"self":[{"href":"http:\/\/esanu.name\/vitalie\/index.php?rest_route=\/wp\/v2\/posts\/861","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/esanu.name\/vitalie\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/esanu.name\/vitalie\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/esanu.name\/vitalie\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/esanu.name\/vitalie\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=861"}],"version-history":[{"count":0,"href":"http:\/\/esanu.name\/vitalie\/index.php?rest_route=\/wp\/v2\/posts\/861\/revisions"}],"wp:attachment":[{"href":"http:\/\/esanu.name\/vitalie\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=861"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/esanu.name\/vitalie\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=861"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/esanu.name\/vitalie\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=861"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}