mother thing
the meaning of meaning: a study of the influence of language upon thought and of the science of symbolism
c. k. ogden and i. a. richards 1923 not yet read
science and sanity: an introduction to non-aristotelian systems and general semantics
alfred korzybski 1970 9780937298015 to read next
blah blah blah: what to do when words don’t work
dan roam 2011 9781591844594
how to make an impact: influence, inform and impress with your reports, presentations and business documents
jon moon 2007 9780273713326
clarity and impact: inform and impress with your reports and talks
jon moon 2016 9780993585104
amazon.co.uk/Clarity-Impact-Inform-Impress-Reports/dp/0993585108
naïve readings: reveilles political and philosophic
ralph lerner 2016 to read next
to be or not to be…
oliver burkeman 2010
https://www.theguardian.com/lifeandstyle/2010/jan/16/e-prime-change-your-life
the top ten arguments against e-prime
james french
generalsemantics.org
a site about nothing
asiteaboutnothing.net/w_e-prime.html
learn in larger chunks: advantage to learning a new language before you can read
literate and preliterate children show different learning patterns in an artificial language learning task
naomi havron et al. 2018
http://dx.doi.org/10.1007/s41809-018-0015-9
Starting Big
Adults typically have problems with learning grammatical relations such as agreement between nouns and their gendered articles (is the Spanish word for problem 'la problema' or 'el problema'?). Young children are much better at learning such arbitrary relations among words. Children's superior learning skills may be due to their age and brain flexibility. However, according to Naomi Havron and her colleagues, children's advantage in grammar learning may also be due to their inability to read. This idea is based on Inbal Arnon's Starting Big hypothesis, which states that younger children are better learners because they focus more on multiword units and less on individual words. The researchers predicted that children should excel at learning certain grammatical relations between words before they become literate. After learning how to read, they should pay more attention to single words, which hinders learning relations between words.
An alien language
To test children's learning abilities, the researchers created a new language. This artificial language contained eight new nouns for existing items, such as "keba" for clock and "nadi" for chair, paired with one of two new 'gender articles': "do" or "ga." On screen, a green cartoon alien with three eyes would point at the object and say the alien equivalent of "this is the clock" (e.g. "kamek do keba"). All sentences started with "kamek" followed by a pause, but there was no pause between the article and noun. A group of 31 first graders (6-year-olds) and 27 third graders (8-year-olds) from schools in Israel listened to all sentences in the alien language for about four minutes.
The researchers then tested the children on vocabulary (nouns) and grammar (gender agreement relations). To test vocabulary, the alien would use the wrong label (calling a clock a "nadi"). To test grammar, the alien would use the wrong gender article (calling a chair "do nadi" instead of "ga nadi"). In each trial, the alien would utter both the correct and the incorrect sentence (e.g. "kamek ga nadi" and "kamek do nadi"), after which children had to decide on the correct one. All children were tested again after six months, during which time the first graders had learned how to read. For the second testing session, the researchers used a similar language with a new set of gender articles and nouns. Would literacy affect the 6-year-olds' learning patterns?
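The trial structure is simple enough to sketch in a few lines of Python; only "kamek", the nouns "keba"/"nadi" and the articles "do"/"ga" come from the text above, and the article used in the wrong-label vocabulary trial is an assumption:

```python
# toy sketch of the trial structure; only "kamek", "keba", "nadi", "do" and
# "ga" come from the study description - everything else is illustrative
import random

gender = {"keba": "do", "nadi": "ga"}  # each noun takes one fixed article

def sentence(article, noun):
    # "kamek" is followed by a pause; no pause between article and noun
    return f"kamek ... {article} {noun}"

def grammar_trial(noun):
    """Correct vs wrong gender article for the same noun."""
    wrong_article = "ga" if gender[noun] == "do" else "do"
    pair = [sentence(gender[noun], noun), sentence(wrong_article, noun)]
    return random.sample(pair, 2)  # both are uttered; the child picks one

def vocabulary_trial(noun):
    """Correct vs wrong noun label, e.g. calling a clock a "nadi"."""
    other = next(n for n in gender if n != noun)
    pair = [sentence(gender[noun], noun), sentence(gender[noun], other)]
    return random.sample(pair, 2)

print(grammar_trial("nadi"))  # e.g. ['kamek ... do nadi', 'kamek ... ga nadi']
```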
The effect of literacy
The preliterate 6-year-olds were better at learning grammatical relations than at learning nouns. Their score on grammatical relations was well above chance (64% correct), while their performance on nouns was at chance (50% correct). The 8-year-olds were equally good at learning grammar and vocabulary, scoring above 65% correct in both sessions. After only six months of reading instruction, the first graders showed the same pattern as the third graders. The now literate 6-year-olds performed equally well on grammatical relations (61% correct) and nouns (57% correct). As expected, their grammatical agreement advantage had disappeared after learning to read.
The researchers conclude that literacy affects the way children learn a new language, and may come at a cost. According to first author Naomi Havron and MPI's Limor Raviv, this finding has implications for second language teaching: exposure to written input can help word learning, but may harm some aspects of grammar learning. Although learning to read has many benefits, the authors argue that "there are advantages to learning a new language before you can read."
abstract Literacy affects many aspects of cognitive and linguistic processing. Among them, it increases the salience of words as units of linguistic processing. Here, we explored the impact of literacy acquisition on children’s learning of an artificial language. Recent accounts of L1–L2 differences relate adults’ greater difficulty with language learning to their smaller reliance on multiword units. In particular, multiword units are claimed to be beneficial for learning opaque grammatical relations like grammatical gender. Since literacy impacts the reliance on words as units of processing, we ask if and how acquiring literacy may change children’s language-learning results. We looked at children’s success in learning novel noun labels relative to their success in learning article-noun gender agreement, before and after learning to read. We found that preliterate first graders were better at learning agreement (larger units) than at learning nouns (smaller units), and that the difference between the two trial types significantly decreased after these children acquired literacy. In contrast, literate third graders were as good in both trial types. These findings suggest that literacy affects not only language processing, but also leads to important differences in language learning. They support the idea that some of children’s advantage in language learning comes from their previous knowledge and experience with language—and specifically, their lack of experience with written texts.
the impact of bilingual environments on selective attention in infancy
kyle j. comishen et al. 2019
http://dx.doi.org/10.1111/desc.12797
The advantages of growing up in a bilingual home can start as early as six months of age, according to new research led by York University's Faculty of Health. In the study, infants who are exposed to more than one language show better attentional control than infants who are exposed to only one language. This means that exposure to bilingual environments should be considered a significant factor in the early development of attention in infancy, the researchers say, and could set the stage for lifelong cognitive benefits.
The research was conducted by Ellen Bialystok, Distinguished Research Professor of Psychology and Walter Gordon Research Chair of Lifespan Cognitive Development at York University and Scott Adler, associate professor in York's Department of Psychology and the Centre for Vision Research, along with lead author Kyle J. Comishen, a former Master's student in their lab. It will be published January 30, 2019 in Developmental Science.
The researchers conducted two separate studies in which infants' eye movements were measured to assess attention and learning. Half of the infants who were studied were being raised in monolingual environments while others were being raised in environments in which they heard two languages spoken approximately half of the time each. The infants were shown images as they lay in a crib equipped with a camera and screen, and their eye movements were tracked and recorded as they watched pictures appear above them, in different areas of the screen. The tracking was conducted 60 times for each infant.
"By studying infants -- a population that does not yet speak any language -- we discovered that the real difference between monolingual and bilingual individuals later in life is not in the language itself, but rather, in the attention system used to focus on language," says Bialystok, co-senior author of the study. "This study tells us that from the very earliest stage of development, the networks that are the basis for developing attention are forming differently in infants who are being raised in a bilingual environment. Why is that important? It's because attention is the basis for all cognition."
In the first study, the infants saw one of two images in the centre of the screen, followed by another image appearing on either the left or right side of the screen. The babies learned to expect that if, for example, a pink and white image appeared in the centre of the screen, it would be followed by an attractive target image on the left; if a blue and yellow image appeared in the centre, the target would appear on the right. All the infants could learn these rules.
In the second study, which began in the same way, researchers switched the rule halfway through the experiment. When they tracked the babies' eye movements, they found that infants who were exposed to a bilingual environment were better at learning the new rule and at anticipating where the target image would appear. This is difficult because the infants needed to learn a new association and replace a previously successful response with a new, contrasting one.
"Infants only know which way to look if they can discriminate between the two pictures that appear in the centre," said Adler, co-senior author of the study. "They will eventually anticipate the picture appearing on the right, for example, by making an eye movement even before that picture appears on the right. What we found was that the infants who were raised in bilingual environments were able to do this better after the rule is switched than those raised in a monolingual environment."
Anything that comes through the brain's processing system interacts with this attentional mechanism, says Adler. Therefore, language as well as visual information can influence the development of the attentional system.
Researchers say the experience of attending to a complex environment in which infants simultaneously process and contrast two languages may account for why infants raised in bilingual environments have greater attentional control than those raised in monolingual environments.
In previous research, bilingual children and adults outperformed monolinguals on some cognitive tasks that require them to switch responses or deal with conflict. The reasons for those differences were thought to follow from the ongoing need for bilinguals to select which language to speak. This new study pushes the explanation back to a time before individuals are actively using languages and switching between them.
"What is so ground-breaking about these results, is that they look at infants who are not bilingual yet and who are only hearing the bilingual environment. This is what's having the impact on cognitive performance," says Adler.
abstract Bilingualism has been observed to influence cognitive processing across the lifespan but whether bilingual environments have an effect on selective attention and attention strategies in infancy remains an unresolved question. In Study 1, infants exposed to monolingual or bilingual environments participated in an eye‐tracking cueing task in which they saw centrally presented stimuli followed by a target appearing on either the left or right side of the screen. Halfway through the trials, the central stimuli reliably predicted targets' locations. In Study 2, the first half of the trials consisted of centrally presented cues that predicted targets' locations; in the second half, the cue–target location relation switched. All infants performed similarly in Study 1, but in Study 2 infants raised in bilingual, but not monolingual, environments were able to successfully update their expectations by making more correct anticipatory eye movements to the target and expressing faster reactive eye latencies toward the target in the post‐switch condition. The experience of attending to a complex environment in which infants simultaneously process and contrast two languages may account for why infants raised in bilingual environments have greater attentional control than those raised in monolingual environments.
do linguistic structures affect human capital? the case of pronoun drop
horst feldmann 2018
http://dx.doi.org/10.1111/kykl.12190
"I learn," "you learn," "she learns," "they learn," yet, according to a surprising new linguistic study, in countries where the dominant language allows personal pronouns such as 'I' to be omitted, learning suffers.
The research by Dr Horst Feldmann of the University of Bath also finds that countries where the dominant languages permit pronoun drop have lower secondary school enrollment rates. This is the first study to analyse the effects of pronoun drop rules on education. It has just been published in the journal Kyklos.
The term 'pronoun drop' refers to grammatical rules that allow speakers to drop a personal pronoun (such as 'I') when it is used as a subject of a sentence. These rules are in fact commonplace around the world -- including in Spanish, Arabic and Eastern languages such as Chinese and Japanese.
Permitting speakers to drop a personal pronoun, Dr Feldmann explained, serves to de-emphasise the significance of the individual. Whereas languages of traditionally collectivist cultures do not require the use of 'I' in sentences, countries whose languages require personal pronouns tend to be more individualistic in their cultural traditions. Examples include English, as well as German and the Scandinavian languages.
Dr Feldmann's study covers an exceptionally large number of individuals and countries. Specifically, to estimate the effect on people's educational attainment he used data on more than 114,000 individuals from 75 countries. To estimate the effect on enrollment rates he used data on 101 countries.
In both cases Dr Feldmann found that the magnitude of the effect is substantial, particularly among females.
Women who normally speak a pronoun drop language at home are roughly 10% less likely to have completed secondary or tertiary education than women who speak a non-pronoun drop language.
Equally, countries in which popularly-spoken languages permit personal pronoun drop have secondary enrollment rates around 10% lower among girls, compared with countries in which the popularly-spoken languages require the use of personal pronouns. In both types of analysis, the magnitude of the effect is slightly lower for males.
Dr Horst Feldmann of the University's Department of Economics explained: "Rules permitting pronoun drop are likely to perpetuate ancient cultural values and norms -- formed and encoded in those rules in the distant past -- that give primacy to the collective over the individual.
"Through such language rules, these ancestral cultural values and norms can still be effective nowadays -- inducing governments and families to invest comparatively little in the education of the young, as education usually increases the independence of the individual from both the state and the extended family and may reduce his or her commitment to these institutions.
"While in many traditionally collectivist societies, collectivist norms are in retreat in contemporary culture, in such societies these ancient norms appear to live on and still adversely affect education today."
Dr Feldmann included in his analysis numerous variables to statistically control for the impact of other factors influencing educational attainment and enrollment. These include income per person and religion.
The study builds on other recent economic research that highlights how linguistic structures, such as gender distinctions in grammar, can also affect both individual behaviour and collective outcomes.
abstract This paper empirically studies the human capital effects of grammatical rules that permit speakers to drop a personal pronoun when used as a subject of a sentence. By de‐emphasizing the significance of the individual, such languages may perpetuate ancient values and norms that give primacy to the collective, inducing governments and families to invest relatively little in education because education usually increases the individual's independence from both the state and the family and may thus reduce the individual's commitment to these institutions. Carrying out both an individual‐level and a country‐level analysis, the paper indeed finds negative effects of pronoun‐drop languages. The individual‐level analysis uses data on 114,894 individuals from 75 countries over 1999‐2014. It establishes that speakers of such languages have a lower probability of having completed secondary or tertiary education, compared with speakers of languages that do not allow pronoun drop. The country‐level analysis uses data from 101 countries over 1972‐2012. Consistent with the individual‐level analysis, it finds that countries where the dominant languages permit pronoun drop have lower secondary school enrollment rates. In both cases, the magnitude of the effect is substantial, particularly among females.
a machine learning approach to predicting psychosis using semantic density and latent content analysis
neguine rezaii et al. 2019
http://doi.org/10.1038/s41537-019-0077-9
A machine-learning method discovered a hidden clue in people's language predictive of the later emergence of psychosis -- the frequent use of words associated with sound. The findings, by scientists at Emory University and Harvard University, were published in the journal npj Schizophrenia.
The researchers also developed a new machine-learning method to more precisely quantify the semantic richness of people's conversational language, a known indicator for psychosis.
Their results show that automated analysis of the two language variables -- more frequent use of words associated with sound and speaking with low semantic density, or vagueness -- can predict whether an at-risk person will later develop psychosis with 93 percent accuracy.
Even trained clinicians had not noticed that people at risk for psychosis use more words associated with sound than average, although abnormal auditory perception is a pre-clinical symptom.
"Trying to hear these subtleties in conversations with people is like trying to see microscopic germs with your eyes," says Neguine Rezaii, first author of the paper. "The automated technique we've developed is a really sensitive tool to detect these hidden patterns. It's like a microscope for warning signs of psychosis."
Rezaii began work on the paper while she was a resident at Emory School of Medicine's Department of Psychiatry and Behavioral Sciences. She is now a fellow in Harvard Medical School's Department of Neurology.
"It was previously known that subtle features of future psychosis are present in people's language, but we've used machine learning to actually uncover hidden details about those features," says senior author Phillip Wolff, a professor of psychology at Emory. Wolff's lab focuses on language semantics and machine learning to predict decision-making and mental health.
"Our finding is novel and adds to the evidence showing the potential for using machine learning to identify linguistic abnormalities associated with mental illness," says co-author Elaine Walker, an Emory professor of psychology and neuroscience who researches how schizophrenia and other psychotic disorders develop.
The onset of schizophrenia and other psychotic disorders typically occurs in the early 20s, with warning signs -- known as prodromal syndrome -- beginning around age 17. About 25 to 30 percent of youth who meet criteria for a prodromal syndrome will develop schizophrenia or another psychotic disorder.
Using structured interviews and cognitive tests, trained clinicians can predict psychosis with about 80 percent accuracy in those with a prodromal syndrome. Machine-learning research is among the many ongoing efforts to streamline diagnostic methods, identify new variables, and improve the accuracy of predictions.
Currently, there is no cure for psychosis.
"If we can identify individuals who are at risk earlier and use preventive interventions, we might be able to reverse the deficits," Walker says. "There are good data showing that treatments like cognitive-behavioral therapy can delay onset, and perhaps even reduce the occurrence of psychosis."
For the current paper, the researchers first used machine learning to establish "norms" for conversational language. They fed a computer software program the online conversations of 30,000 users of Reddit, a social media platform where people have informal discussions about a range of topics. The software program, known as Word2Vec, uses an algorithm to change individual words to vectors, assigning each one a location in a semantic space based on its meaning. Those with similar meanings are positioned closer together than those with far different meanings.
The Wolff lab also developed a computer program to perform what the researchers dubbed "vector unpacking," or analysis of the semantic density of word usage. Previous work has measured semantic coherence between sentences. Vector unpacking allowed the researchers to quantify how much information was packed into each sentence.
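To make the embedding step concrete, here is a hedged sketch using the gensim library's Word2Vec implementation (the paper names Word2Vec but not its tooling); the toy corpus and the redundancy-based density proxy are illustrative stand-ins, not the authors' pipeline or their vector-unpacking method:

```python
# rough sketch, assuming gensim is installed; the toy corpus and the crude
# density proxy are illustrative, NOT the paper's vector-unpacking method
from itertools import combinations
from gensim.models import Word2Vec

posts = [
    ["i", "heard", "a", "strange", "sound", "last", "night"],
    ["the", "noise", "kept", "me", "awake"],
] * 200  # stand-in corpus; the study used posts from 30,000 reddit users

model = Word2Vec(posts, vector_size=100, window=5, min_count=5)
print(model.wv.most_similar("sound"))  # words with related meanings sit nearby

def crude_density(tokens):
    """Rough proxy: 1 minus mean pairwise similarity of a sentence's words.
    Low values suggest repetitive, vague wording (low semantic density)."""
    words = [t for t in tokens if t in model.wv]
    pairs = list(combinations(words, 2))
    if not pairs:
        return 0.0
    mean_sim = sum(model.wv.similarity(a, b) for a, b in pairs) / len(pairs)
    return 1.0 - float(mean_sim)

print(crude_density(["strange", "sound", "noise", "night"]))
```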
After generating a baseline of "normal" data, the researchers applied the same techniques to diagnostic interviews of 40 participants that had been conducted by trained clinicians, as part of the multi-site North American Prodrome Longitudinal Study (NAPLS), funded by the National Institutes of Health. NAPLS is focused on young people at clinical high risk for psychosis. Walker is the principal investigator for NAPLS at Emory, one of nine universities involved in the 14-year project.
The automated analyses of the participant samples were then compared to the normal baseline sample and the longitudinal data on whether the participants converted to psychosis.
The results showed that higher than normal usage of words related to sound, combined with a higher rate of using words with similar meaning, meant that psychosis was likely on the horizon.
Strengths of the study include the simplicity of using just two variables -- both of which have a strong theoretical foundation -- the replication of the results in a holdout dataset, and the high accuracy of its predictions, at above 90 percent.
"In the clinical realm, we often lack precision," Rezaii says. "We need more quantified, objective ways to measure subtle variables, such as those hidden within language usage."
Rezaii and Wolff are now gathering larger data sets and testing the application of their methods on a variety of neuropsychiatric diseases, including dementia.
"This research is interesting not just for its potential to reveal more about mental illness, but for understanding how the mind works -- how it puts ideas together," Wolff says. "Machine learning technology is advancing so rapidly that it's giving us tools to data mine the human mind."
abstract Subtle features in people’s everyday language may harbor the signs of future mental illness. Machine learning offers an approach for the rapid and accurate extraction of these signs. Here we investigate two potential linguistic indicators of psychosis in 40 participants of the North American Prodrome Longitudinal Study. We demonstrate how the linguistic marker of semantic density can be obtained using the mathematical method of vector unpacking, a technique that decomposes the meaning of a sentence into its core ideas. We also demonstrate how the latent semantic content of an individual’s speech can be extracted by contrasting it with the contents of conversations generated on social media, here 30,000 contributors to Reddit. The results revealed that conversion to psychosis is signaled by low semantic density and talk about voices and sounds. When combined, these two variables were able to predict the conversion with 93% accuracy in the training and 90% accuracy in the holdout datasets. The results point to a larger project in which automated analyses of language are used to forecast a broad range of mental disorders well in advance of their emergence.
text to speech
speech synthesis from neural decoding of spoken sentences
gopala k. anumanchipalli et al. 2019
http://dx.doi.org/10.1038/s41586-019-1119-1
Researchers generated natural-sounding synthetic speech by using brain activity to control a virtual vocal tract -- an anatomically detailed computer simulation including the lips, jaw, tongue, and larynx. The study was conducted in research participants with intact speech, but the technology could one day restore the voices of people who have lost the ability to speak due to paralysis and other forms of neurological damage.
Stroke, traumatic brain injury, and neurodegenerative diseases such as Parkinson's disease, multiple sclerosis, and amyotrophic lateral sclerosis (ALS, or Lou Gehrig's disease) often result in an irreversible loss of the ability to speak. Some people with severe speech disabilities learn to spell out their thoughts letter-by-letter using assistive devices that track very small eye or facial muscle movements. However, producing text or synthesized speech with such devices is laborious, error-prone, and painfully slow, typically permitting a maximum of 10 words per minute, compared to the 100-150 words per minute of natural speech.
The new system being developed in the laboratory of Edward Chang, MD -- described April 24, 2019 in Nature -- demonstrates that it is possible to create a synthesized version of a person's voice that can be controlled by the activity of their brain's speech centers. In the future, this approach could not only restore fluent communication to individuals with severe speech disability, the authors say, but could also reproduce some of the musicality of the human voice that conveys the speaker's emotions and personality.
"For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual's brain activity," said Chang, a professor of neurological surgery and member of the UCSF Weill Institute for Neuroscience. "This is an exhilarating proof of principle that with technology that is already within reach, we should be able to build a device that is clinically viable in patients with speech loss."
Virtual Vocal Tract Improves Naturalistic Speech Synthesis
The research was led by Gopala Anumanchipalli, PhD, a speech scientist, and Josh Chartier, a bioengineering graduate student in the Chang lab. It builds on a recent study in which the pair described for the first time how the human brain's speech centers choreograph the movements of the lips, jaw, tongue, and other vocal tract components to produce fluent speech.
From that work, Anumanchipalli and Chartier realized that previous attempts to directly decode speech from brain activity might have met with limited success because these brain regions do not directly represent the acoustic properties of speech sounds, but rather the instructions needed to coordinate the movements of the mouth and throat during speech.
"The relationship between the movements of the vocal tract and the speech sounds that are produced is a complicated one," Anumanchipalli said. "We reasoned that if these speech centers in the brain are encoding movements rather than sounds, we should try to do the same in decoding those signals."
In their new study, Anumanchipalli and Chartier asked five volunteers being treated at the UCSF Epilepsy Center -- patients with intact speech who had electrodes temporarily implanted in their brains to map the source of their seizures in preparation for neurosurgery -- to read several hundred sentences aloud while the researchers recorded activity from a brain region known to be involved in language production.
Based on the audio recordings of participants' voices, the researchers used linguistic principles to reverse engineer the vocal tract movements needed to produce those sounds: pressing the lips together here, tightening vocal cords there, shifting the tip of the tongue to the roof of the mouth, then relaxing it, and so on.
This detailed mapping of sound to anatomy allowed the scientists to create a realistic virtual vocal tract for each participant that could be controlled by their brain activity. This comprised two "neural network" machine learning algorithms: a decoder that transforms brain activity patterns produced during speech into movements of the virtual vocal tract, and a synthesizer that converts these vocal tract movements into a synthetic approximation of the participant's voice.
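A minimal sketch of that two-stage pipeline, assuming PyTorch and purely illustrative layer and feature sizes (the paper's actual recurrent architecture, feature sets and training procedure are not reproduced here):

```python
# minimal two-stage sketch: brain activity -> kinematics -> acoustics;
# all sizes are illustrative assumptions, not the authors' model
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Stage 1: neural activity -> vocal tract movements (kinematics)."""
    def __init__(self, n_electrodes=256, n_articulators=33):
        super().__init__()
        self.rnn = nn.LSTM(n_electrodes, 128, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * 128, n_articulators)
    def forward(self, x):
        h, _ = self.rnn(x)
        return self.out(h)

class Synthesizer(nn.Module):
    """Stage 2: kinematics -> acoustic features of the participant's voice."""
    def __init__(self, n_articulators=33, n_acoustic=32):
        super().__init__()
        self.rnn = nn.LSTM(n_articulators, 128, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * 128, n_acoustic)
    def forward(self, x):
        h, _ = self.rnn(x)
        return self.out(h)

neural = torch.randn(1, 100, 256)             # (batch, time steps, electrodes)
acoustics = Synthesizer()(Decoder()(neural))  # stage 1 feeds stage 2
print(acoustics.shape)                        # torch.Size([1, 100, 32])
```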
The synthetic speech produced by these algorithms was significantly better than synthetic speech directly decoded from participants' brain activity without the inclusion of simulations of the speakers' vocal tracts, the researchers found. The algorithms produced sentences that were understandable to hundreds of human listeners in crowdsourced transcription tests conducted on the Amazon Mechanical Turk platform.
As is the case with natural speech, the transcribers were more successful when they were given shorter lists of words to choose from, as would be the case with caregivers who are primed to the kinds of phrases or requests patients might utter. The transcribers accurately identified 69 percent of synthesized words from lists of 25 alternatives and transcribed 43 percent of sentences with perfect accuracy. With a more challenging 50 words to choose from, transcribers' overall accuracy dropped to 47 percent, though they were still able to understand 21 percent of synthesized sentences perfectly.
"We still have a ways to go to perfectly mimic spoken language," Chartier acknowledged. "We're quite good at synthesizing slower speech sounds like 'sh' and 'z' as well as maintaining the rhythms and intonations of speech and the speaker's gender and identity, but some of the more abrupt sounds like 'b's and 'p's get a bit fuzzy. Still, the levels of accuracy we produced here would be an amazing improvement in real-time communication compared to what's currently available."
Artificial Intelligence, Linguistics, and Neuroscience Fueled Advance
The researchers are currently experimenting with higher-density electrode arrays and more advanced machine learning algorithms that they hope will improve the synthesized speech even further. The next major test for the technology is to determine whether someone who can't speak could learn to use the system without being able to train it on their own voice and to make it generalize to anything they wish to say.
Preliminary results from one of the team's research participants suggest that the researchers' anatomically based system can decode and synthesize novel sentences from participants' brain activity nearly as well as the sentences the algorithm was trained on. Even when the researchers provided the algorithm with brain activity data recorded while one participant merely mouthed sentences without sound, the system was still able to produce intelligible synthetic versions of the mimed sentences in the speaker's voice.
The researchers also found that the neural code for vocal movements partially overlapped across participants, and that one research subject's vocal tract simulation could be adapted to respond to the neural instructions recorded from another participant's brain. Together, these findings suggest that individuals with speech loss due to neurological impairment may be able to learn to control a speech prosthesis modeled on the voice of someone with intact speech.
"People who can't move their arms and legs have learned to control robotic limbs with their brains," Chartier said. "We are hopeful that one day people with speech disabilities will be able to learn to speak again using this brain-controlled artificial vocal tract."
Added Anumanchipalli, "I'm proud that we've been able to bring together expertise from neuroscience, linguistics, and machine learning as part of this major milestone towards helping neurologically disabled patients."
abstract Technology that translates neural activity into speech would be transformative for people who are unable to communicate as a result of neurological impairments. Decoding speech from neural activity is challenging because speaking requires very precise and rapid multi-dimensional control of vocal tract articulators. Here we designed a neural decoder that explicitly leverages kinematic and sound representations encoded in human cortical activity to synthesize audible speech. Recurrent neural networks first decoded directly recorded cortical activity into representations of articulatory movement, and then transformed these representations into speech acoustics. In closed vocabulary tests, listeners could readily identify and transcribe speech synthesized from cortical activity. Intermediate articulatory dynamics enhanced performance even with limited data. Decoded articulatory representations were highly conserved across speakers, enabling a component of the decoder to be transferrable across participants. Furthermore, the decoder could synthesize speech when a participant silently mimed sentences. These findings advance the clinical viability of using speech neuroprosthetic technology to restore spoken communication.
cantodict online dictionary
cantonese.sheik.co.uk/dictionary/
add meaning?
cantonese.sheik.co.uk/phorum/read.php
cantodict search using “shortcuts” share extension on highlighted text
https://www.icloud.com/shortcuts/eb47566e36d0479898f9a10e796e7333
creating audio mandarin or cantonese flashcards in anki · lucwastiaux/chinese wiki · github
https://github.com/lucwastiaux/chinese/wiki/creating-audio-mandarin-or-cantonese-flashcards-in-anki
awesometts: easily add text-to-speech to your anki cards
https://ankiatts.appspot.com/
stroke order search using “shortcuts” share extension on highlighted text
https://www.icloud.com/shortcuts/48d55bb6477e44d2b0825d3075df4c71
cc-canto online dictionary
cantonese.org/
cc-canto search using “shortcuts” share extension on highlighted text
https://www.icloud.com/shortcuts/37ef275202b840bcb4265e694de6f2a5
bbc newsweek cantonese
how to type cantonese on macos with squirrel
阿擇 (chaaak) 2018
https://medium.com/@chaaak/how-to-type-cantonese-on-macos-with-squirrel-3d620f7d04c
chinese subtitles (simultaneous with english) on netflix with desktop chrome browser
languagelearningwithnetflix.com/catalogue.html#language=Chinese&country=United Kingdom
shirabe jisho offline dictionary
https://itunes.apple.com/gb/app/shirabe-jisho/id1005203380?mt=8
has stroke orders for many characters and can auto–search the clipboard if that option is chosen (triggered by switching to the jisho app in fullscreen mode)
also has many stroke animations
japanese subtitles (simultaneous with english) on netflix with desktop chrome browser
languagelearningwithnetflix.com/catalogue.html#language=Japanese&country=United Kingdom
a new french keyboard layout via algorithm
http://aalto.fi/news/changing-how-a-country-types
"Algorithms, like the ones we have developed for the French keyboard, can help us make better decisions. They can quickly evaluate the problems and benefits of different designs and achieve fair compromises. But they also need the guidance from humans who know about the problem," explains Dr Anna Maria Feit, the lead researcher of the project.
With concern voiced by the French government in 2015 about the existing keyboard -- and its inability to support the proper use of French -- the priority was to create a new standard that allows easy and quick typing of the required symbols. The algorithm created by the Aalto University-led team automatically arranged the characters in an optimal way.
The new AZERTY standard includes commonly used characters in the French language, such as œ, « », or É, as well as 60 other new characters, which are arranged in groups predicted by the algorithm, making the layout more intuitive to use. Characters like @ and / have been moved to more accessible locations, as have ligatures and accents.
"When rearranging the symbols on the keyboard, there are conflicting things to consider," says Feit, who completed her doctoral studies at Aalto and now works at ETH Zurich.
"Characters that get used the most should be moved to a position that is easy to reach. But if you move it a long way from where it was originally, people will take a long time to learn it and be less likely to use your new layout. You might also want to keep symbols that look similar and have similar functions together to make them easier to find and use, like the colon and the semi-colon, even though one gets used more than the other," she explains.
To inform the design, researchers built statistical models of character use in modern French, drawing on newspaper articles, French Wikipedia, legal texts, as well as emails, social media, and programming code. In contrast to previous work that assumes people use their fingers in certain ways, they gathered the key presses of over 900 people in a large-scale crowdsourcing study to see what counted as an 'easy' key press. In addition, they included state-of-the-art findings from ergonomics literature.
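In spirit, such an optimiser scores candidate layouts by weighing how often a character is typed against how hard its assigned key is to reach. A toy sketch of such a cost function, with every number invented and far simpler than the project's real model:

```python
# toy sketch: score a layout as the sum of character frequency times key
# effort; all numbers here are invented for illustration only
freq = {"e": 0.147, "é": 0.019, "@": 0.002}              # how often each is typed
effort = {"home": 1.0, "top_row": 1.4, "modifier": 2.6}  # cost of each slot type

def layout_cost(layout):
    """layout maps each character to the kind of key slot it occupies."""
    return sum(freq[ch] * effort[slot] for ch, slot in layout.items())

old = {"e": "home", "é": "modifier", "@": "modifier"}
new = {"e": "home", "é": "top_row", "@": "top_row"}  # frequent chars made easier

print(layout_cost(old), layout_cost(new))  # lower total cost = easier typing
```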
"The trick to making the collaboration effective was using our data to build a tool that the French experts in the standardisation committee could put different conditions into, and see what the optimal keyboard that resulted from the data looked like," says Aalto University Professor Antti Oulasvirta.
Dr Mathieu Nancel, a former researcher at Aalto now based at Inria Lille -- Nord Europe in France, brought the algorithm to the French committee and helped them to work with it. "Before we started working together, they tried to place over 100 characters by hand. Our tool allowed them to focus on higher-level goals, such as making typing special characters fast or keeping it similar to the previous layout," he says.
"Together with the committee, we tried different parameters and discussed the layouts suggested by the computer algorithm. We could also change the layout by hand and the tool would tell us how this impacted typing speed or ergonomics. We then adapted the underlying computer model to also take into account, for example, cultural aspects and comments from the French public," Nancel adds.
The algorithm that Dr Feit and team produced for the French committee can easily be adapted for any language; it simply requires data for modelling. Most countries use the standard QWERTY keyboard -- originally designed for the English language -- despite frequently used accented characters or different styles of punctuation. Dr Feit hopes that the model produced for France could be used by other groups in the future.
"Our goal is that in the future people and algorithms design user interfaces together," she says.
relating natural language aptitude to individual differences in learning programming languages
chantel s. prat et al. 2020
http://dx.doi.org/10.1038/s41598-020-60661-8
A natural aptitude for learning languages is a stronger predictor of learning to program than basic math knowledge, or numeracy. That’s because writing code also involves learning a second language: the ability to learn that language’s vocabulary and grammar, and how they work together to communicate ideas and intentions. Other cognitive functions tied to both areas, such as problem solving and the use of working memory, also play key roles.
“Many barriers to programming, from prerequisite courses to stereotypes of what a good programmer looks like, are centered around the idea that programming relies heavily on math abilities, and that idea is not borne out in our data,” said lead author Chantel Prat, an associate professor of psychology at the UW and at the Institute for Learning & Brain Sciences. “Learning to program is hard, but is increasingly important for obtaining skilled positions in the workforce. Information about what it takes to be good at programming is critically missing in a field that has been notoriously slow in closing the gender gap.”
Published online March 2 in Scientific Reports, an open-access journal from the Nature Publishing Group, the research examined the neurocognitive abilities of more than three dozen adults as they learned Python, a common programming language. Following a battery of tests to assess their executive function, language and math skills, participants completed a series of online lessons and quizzes in Python. Those who learned Python faster, and with greater accuracy, tended to have a mix of strong problem-solving and language abilities.
In today’s STEM-focused world, learning to code opens up a variety of possibilities for jobs and extended education. Coding is associated with math and engineering; college-level programming courses tend to require advanced math to enroll and they tend to be taught in computer science and engineering departments. Other research, namely from UW psychology professor Sapna Cheryan, has shown that such requirements and perceptions of coding reinforce stereotypes about programming as a masculine field, potentially discouraging women from pursuing it.
But coding also has a foundation in human language: Programming involves creating meaning by stringing symbols together in rule-based ways.
Though a few studies have touched on the cognitive links between language learning and computer programming, some of the data are decades old, drawn from languages such as Pascal that are now out of date, and none used natural language aptitude measures to predict individual differences in learning to program.
So Prat, who specializes in the neural and cognitive predictors of learning human languages, set out to explore the individual differences in how people learn Python. Python was a natural choice, Prat explained, because it resembles English in structures such as paragraph indentation, and it uses many real words rather than symbols for functions.
To evaluate the neural and cognitive characteristics of “programming aptitude,” Prat studied a group of native English speakers between the ages of 18 and 35 who had never learned to code.
Before learning to code, participants took two completely different types of assessments. First, participants underwent a five-minute electroencephalography scan, which recorded the electrical activity of their brains as they relaxed with their eyes closed. In previous research, Prat showed that patterns of neural activity while the brain is at rest can predict up to 60% of the variability in the speed with which someone can learn a second language (in that case, French).
“Ultimately, these resting-state brain metrics might be used as culture-free measures of how someone learns,” Prat said.
Then the participants took eight different tests: one that specifically covered numeracy; one that measured language aptitude; and others that assessed attention, problem-solving and memory.
To learn Python, the participants were assigned ten 45-minute online instruction sessions using the Codecademy educational tool. Each session focused on a coding concept, such as lists or if/then conditionals, and concluded with a quiz that a user needed to pass in order to progress to the next session. For help, users could turn to a “hint” button, an informational blog from past users and a “solution” button, in that order.
From a shared mirror screen, a researcher followed along with each participant and was able to calculate their “learning rate,” or speed with which they mastered each lesson, as well as their quiz accuracy and the number of times they asked for help.
After completing the sessions, participants took a multiple-choice test on the purpose of functions (the vocabulary of Python) and the structure of coding (the grammar of Python). For their final task, they programmed a game — Rock, Paper, Scissors — considered an introductory project for a new Python coder. This helped assess their ability to write code using the information they had learned.
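For a sense of scale, a minimal Python rock–paper–scissors of the kind described might look like this (an illustration only, not the study's actual task materials):

```python
# a minimal sketch of the kind of beginner project described above;
# illustrative only, not the study's actual task materials
import random

OPTIONS = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

player = input("rock, paper or scissors? ").strip().lower()
if player not in OPTIONS:
    print("invalid choice")
else:
    computer = random.choice(OPTIONS)
    print(f"computer chose {computer}")
    if player == computer:
        print("draw")
    elif BEATS[player] == computer:
        print("you win")    # the player's choice beats the computer's
    else:
        print("you lose")
```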
Ultimately, researchers found that scores from the language aptitude test were the strongest predictors of participants’ learning rate in Python. Scores from tests in numeracy and fluid reasoning were also associated with Python learning rate, but each of these factors explained less variance than language aptitude did.
Presented another way, across learning outcomes, participants’ language aptitude, fluid reasoning and working memory, and resting-state brain activity were all greater predictors of Python learning than was numeracy, which explained an average of 2% of the differences between people. Importantly, Prat also found that the same characteristics of resting-state brain data that previously explained how quickly someone would learn to speak French, also explained how quickly they would learn to code in Python.
“This is the first study to link both the neural and cognitive predictors of natural language aptitude to individual differences in learning programming languages. We were able to explain over 70% of the variability in how quickly different people learn to program in Python, and only a small fraction of that amount was related to numeracy,” Prat said.
abstract This experiment employed an individual differences approach to test the hypothesis that learning modern programming languages resembles second “natural” language learning in adulthood. Behavioral and neural (resting-state EEG) indices of language aptitude were used along with numeracy and fluid cognitive measures (e.g., fluid reasoning, working memory, inhibitory control) as predictors. Rate of learning, programming accuracy, and post-test declarative knowledge were used as outcome measures in 36 individuals who participated in ten 45-minute Python training sessions. The resulting models explained 50–72% of the variance in learning outcomes, with language aptitude measures explaining significant variance in each outcome even when the other factors competed for variance. Across outcome variables, fluid reasoning and working-memory capacity explained 34% of the variance, followed by language aptitude (17%), resting-state EEG power in beta and low-gamma bands (10%), and numeracy (2%). These results provide a novel framework for understanding programming aptitude, suggesting that the importance of numeracy may be overestimated in modern programming education environments.
technolingualism: the mind and the machine
james pfrehm 2018
radical candor: how to get what you want by saying what you mean
kim scott 2017
don’t sleep, there are snakes: life and language in the amazonian jungle
daniel everett 2009
louder than words: the new science of how the mind makes meaning
benjamin bergen 2012
language at the speed of sight: how we read, why so many can’t, and what can be done about it
mark seidenberg 2017
the geography of thought: how asians and westerners think differently…and why
richard nisbett 2010
the chinese typewriter: a history
thomas mullaney 2017
typing polytonic greek on mac
available in ios via an external bluetooth keyboard (add Greek as a keyboard option in settings, attach the bluetooth keyboard, select the hardware keyboard option Greek polytonic), and very usable on mac:
system preferences - keyboard - input sources; add greek - polytonic
system preferences - keyboard - shortcuts - input sources; select a key combination for changing the input source, e.g. hyper2–`
type the key combination for the accents before you enter the letter they modify:
accent | mark | key
--- | --- | ---
smooth breather | ᾽ | ‘
rough breather | ῾ | “
acute | ´ | ;
diaeresis | ¨ | :
circumflex | ῀ | [
iota subscript | ι | {
grave | ` | ]
grave (also) | ` | }
circumflex with smooth breather | ῏ | -
circumflex with rough breather | ῟ | _
acute with smooth breather | ῎ | /
acute with rough breather | ῞ | ?
you can get by with the single combining accents until you are comfortable with them, and then add the other keys as you gain speed. there are several other keys you can use, for grave with smooth or rough breather, diaeresis with acute etc. ( ῍ ), ( ῝ ), ( ΅ )
keyboardmaestro is very helpful in that it keeps the “locations” of its own shortcut keys the same across different input keyboards — for example, my normal em–space key combination is typed using the same physical keys, and i don’t have to shift gears into the other alphabet for those shortcuts (i still have to shift for system–integrated, i.e. non–keyboardmaestro, shortcuts such as copy and paste).
for rarer accents such as ῑ, ῡ, ᾱ, ῒ, ΐ, ῐ, ᾰ, i use keyboardmaestro macros to insert the characters. for even rarer accents such as ᾱ́, ῑ́, ῡ́, ᾰ́, ῐ́, i also use keyboardmaestro macros but additionally need the unicode combining acute ( ́ ). NB you can copy and paste that combining acute into your own macros, or search for “combining acute” in the character viewer and copy from there.
keyboardmaestro is also very helpful for creating emacs–style bindings for often–used key combinations that would be awkward to type in sublime text when the keyboard input method is set to greek polytonic, e.g. i set hyper2–l (dvorak equivalent ctrl–n) to “move cursor down to next line, after inserting a full stop”, and hyper2–d (dvorak equivalent ctrl–e) to “move cursor to end of line”.
hoplite polytonic greek ios keyboard
https://itunes.apple.com/gb/app/hoplite-greek-keyboard/id1200319047?mt=8
agk polytonic greek ios keyboard
https://itunes.apple.com/gb/app/agk-ancient-greek-keyboard/id1018791342?mt=8
appears to use only combining unicode accents, as recommended in the unicode standard. in certain fonts it may therefore not be usable for characters with multiple accents or for capitals: for example, Ὂ (combining accents) instead of Ὂ (non–combining, precomposed) appears strange in helvetica neue thin but fine in helvetica neue regular (Ὂ and Ὂ), because apple’s ios font system substitutes a different font for characters it cannot represent in the chosen font (notice how the non–combining version in helvetica neue thin has a different stroke width)
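the difference between the two encodings can be demonstrated with python’s standard unicodedata module (a small demonstration; Ὂ is U+1F4A in its precomposed form):

```python
# small demonstration using python's standard unicodedata module;
# U+1F4A is capital omicron with smooth breathing (psili) and grave (varia)
import unicodedata

precomposed = "\u1f4a"                 # single code point: Ὂ
combining = "\u039f\u0313\u0300"       # Ο + combining psili + combining grave

print(precomposed == combining)                                # False
print(unicodedata.normalize("NFC", combining) == precomposed)  # True: composes
print(unicodedata.normalize("NFD", precomposed) == combining)  # True: decomposes
print(unicodedata.name(precomposed))
# GREEK CAPITAL LETTER OMICRON WITH PSILI AND VARIA
```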
Link: en.wikipedia.org/wiki/Greek_diacritics
supports archaic characters like the digamma Ϝ (wau)
Link: en.wikipedia.org/wiki/Archaic_Greek_alphabets
未来へ/後來
miraie/houlai
kiroro/劉若英
漢字
Link: run-workflow
☆ほら 足元を見てごらん
これがあなたの歩む道
ほら 前を見てごらん
あれがあなたの未来
母がくれたたくさんの優しさ
愛を抱いて歩めと繰り返した
あの時はまだ幼くて意味など知らない
そんな私の手を握り
一緒に歩んできた
夢はいつも空高くあるから
届かなくて怖いね だけど追い続けるの
自分の物語だからこそ諦めたくない
不安になると手を握り
一緒に歩んできた
★その優しさを時には嫌がり
離れた母へ素直になれず
☆繰り返す
★繰り返す
☆繰り返す x2
未来へ向かって
ゆっくりと歩いて行こう
english
☆ほら 足元を見てごらん
hora ASHIMOTOwoMItegoran
look where you stand
これがあなたの歩む道
koregaanatanoAYUmuMICHI
see the path you walk upon
ほら 前を見てごらん
hora MAEwoMItegoran
look straight ahead
あれがあなたの未来
aregaanatanoMIRAI
there lies your future
母がくれたたくさんの優しさ
HAHAgakuretatakusannoYASAshisa
mother gave me so much tenderness
愛を抱いて歩めと繰り返した
AIwoIDAiteAYUmetoKUriKAEshita
embrace love and walk on, she would always say
あの時はまだ幼くて意味など知らない
anoTOKIwamadaOSANAkuteIMInadoSHIranai
back then in my youth, i knew not what she meant
そんな私の手を握り
sonnaWATASHInoTEwoNIGIri
she held my little hand
一緒に歩んできた
ISSHOniAYUndekita
and we walked together
夢はいつも空高くあるから
YUMEwaitsumoSORATAKAkuarukara
as dreams soar in the sky so high
届かなくて怖いね だけど追い続けるの
TODOkanakuteKOWAine dakedoOiTSUZUkeruno
they’re out of reach and that’s scary, but still we chase them
自分の物語だからこそ諦めたくない
JIBUNnoSUTORIdakarakosoAKIRAmetakunai
for we mustn’t give up our own story
不安になると手を握り
FUANninarutoTEwoNIGIri
when i felt unsure, we would
一緒に歩んできた
ISSHOniAYUndekita
walk that way together
★その優しさを時には嫌がり
sonoYASAshisawoTOKIniwaIYAgari
though at times i couldn’t accept that kindness
離れた母へ素直になれず
HANAretaHAHAeSUNAOninarezu
unable to be honest with the mother i had left behind
☆繰り返す
★繰り返す
☆繰り返す x2
未来へ向かって
MIRAIeMUkatte
face your future
ゆっくりと歩いて行こう
yukkuritoARUiteYUkou
and step by step, go on
中文
Link: run-workflow
後來 我總算學會了 如何去愛
可惜你 早已遠去 消失在人海
後來 終於在眼淚中明白
有些人 一旦錯過就不再
梔子花 白花瓣 落在我藍色百褶裙上
愛你 你輕聲說
我低下頭 聞見一陣芬芳
那個永恆的夜晚 十七歲仲夏
你吻我的那個夜晚
讓我往後的時光 每當有感嘆
總想起 當天的星光
那時候的愛情 為什麼就能那樣簡單
而又是為什麼 人年少時
一定要讓深愛的人受傷
在這相似的深夜裡 你是否一樣
也在靜靜追悔感傷
如果當時我們能 不那麼倔強
現在也 不那麼遺憾
你都如何回憶我 帶著笑或是很沉默
這些年來 有沒有人能讓你不寂寞
永遠不會再重來
有一個男孩 愛著那個女孩
english
後來 我總算學會了 如何去愛
hòulái / wǒ zǒngsuàn xuéhuì le rúhé qù ài
since then, i’ve finally learned how to love
可惜你 早已遠去 消失在人海
kěxí nǐ zǎoyǐ yuǎn qù xiāoshī zài rén hǎi
but sadly you’ve disappeared into the sea of faces
後來 終於在眼淚中明白
hòulái / zhōngyú zài yǎnlèi zhōng míngbái
since then, through all the tears, i’ve finally understood
有些人 一旦錯過就不再
yǒuxiē rén yīdàn cuòguò jiù bù zài
that you can only miss somebody once
梔子花 白花瓣 落在我藍色百褶裙上
zhīzihuā bái huābàn luò zài wǒ lán sè bǎizhěqún shàng
white petals of gardenia flowers fall on my blue pleated skirt
愛你 你輕聲說
ài nǐ nǐ qīngshēng shuō
“i love you,” you softly say
我低下頭 聞見一陣芬芳
wǒ dīxià tou wén jiàn yīzhèn fēnfāng
i lowered my head and smelled the burst of fragrance
那個永恆的夜晚 十七歲仲夏
nàgè yǒnghéng de yèwǎn shíqī suì zhòngxià
that eternal night; at seventeen, midsummer
你吻我的那個夜晚
nǐ wěn wǒ de nàgè yèwǎn
that night you kissed me
讓我往後的時光 每當有感嘆
ràng wǒ wǎng hòu de shíguāng měi dāng yǒu gǎntàn
so that in my days to come, whenever i sigh
總想起 當天的星光
zǒng xiǎngqǐ dàngtiān de xīngguāng
i always recall that day’s starlight
那時候的愛情 為什麼就能那樣簡單
nà shíhòu de àiqíng wèishéme jiù néng nàyàng jiǎndān
that love then, why was it so simple like that?
而又是為什麼 人年少時
ér yòu shì wèishéme rén niánshào shí
and also, why when we are young
一定要讓深愛的人受傷
yīdìng yào ràng shēn ài de rén shòushāng
must we let those whom we love be hurt?
在這相似的深夜裡 你是否一樣
zài zhè xiāngsì de shēnyè lǐ nǐ shìfǒu yīyàng
on this similar deep of night, are you the same?
也在靜靜追悔感傷
yě zài jìng jìng zhuīhuǐ gǎnshāng
also feeling the quiet hurt of regret?
如果當時我們能 不那麼倔強
rúguǒ dāngshí wǒmen néng bù nàme juéjiàng
if at the time we could have been less stubborn
現在也 不那麼遺憾
xiànzài yě bù nàme yíhàn
we would not feel regret now
你都如何回憶我 帶著笑或是很沉默
nǐ dōu rúhé huíyì wǒ dàizhe xiào huò shì hěn chénmò
how do you remember me? with a smile, or in silence?
這些年來 有沒有人能讓你不寂寞
zhèxiē niánlái yǒu méiyǒu rén néng ràng nǐ bù jìmò
these past years, has there been someone to keep you from loneliness?
永遠不會再重來
yǒngyuǎn bù huì zài chóng lái
it will never happen again
有一個男孩 愛著那個女孩
yǒu yīgè nánhái àizhe nàgè nǚhái
a boy in love with that girl
for things you want to actively recall
anki spaced repetition system
apps.ankiweb.net/
learning tricks
make a game of doing enough reviews in anki to fill one column of characters, writing each one out once while reviewing. in addition, do not stop at one full column, but continue on for a few words of the next; this makes it easier to resume and keep going when you return to reviewing. if i intersperse one column’s worth of reviewing with a short fun task, i find i feel more willing to return to reviewing. it could be something as simple as changing the writing hand from right to left, or changing the writing style from a vertical ink brush grip to a loose extended stylus grip
current usage: anki on the left third of the screen, with the anki top right corner set for edit, other anki areas set for answer-easy except the centre, which is set for answer-hard. writing in concepts along the left hand edge, single column, moved offscreen-left when a column is complete (here multiple columns are pulled out for show). safari in slideover on the right hand side for cantodict lookup
cantonese decks
https://ankiweb.net/shared/decks/Cantonese
Cantonese 廣東話/粵語 Beta v0.7
87.66MB. 22886 audio & 1 images. Updated 2015-05-28.
https://ankiweb.net/shared/info/1719293397
japanese decks
https://ankiweb.net/shared/decks/japanese
Japanese Core 2000 - Complete 01 - 09
240.58MB. 3989 audio & 1959 images. Updated 2016-11-15.
https://ankiweb.net/shared/info/1723306405
there is an in-built night mode, or you can use ios smart invert colours
invert colours and colour filters, with concepts as writing scratchpad, supporting anki
you can also do something similar with procreate; it’s around the same level of difficulty setting up from scratch (you don't get vector or infinite canvas using procreate, but you do get an easy darker background)
or just use pencil and paper and your present phone for anki…
i find the pen tool at 0.5 point width (100% magnification) and 95% opacity nice — but if you change magnification often, consider the (constant width) wire tool instead. it won’t look as organic at 100%, but may look better at different magnifications because it stays the same width on screen no matter the magnification
concepts on right hand side for right handed writing (this example uses built-in night mode with colour filters — note the separation bar is black)
i set the “non-pencil” finger action to pan the screen, and use the replacement stylus cap for this, which makes getting fresh scratchpad space easy. also, consider erasing the layer (you may need to create a new layer first) every so often so the app doesn’t use up too much memory
invert colours and slide-over
the “two-finger tap” to undo setting is useful in this context as well, to erase the last stroke; set “four-finger tap” to hide menus
Double-tap the eraser button to open a Quick Clear menu. Here you can choose to delete everything, or clear all strokes, images or text.
in procreate, i have set three-finger scrub for this
disable rotation and magnification to prevent accidental changes. at default resolution, procreate’s gel pen at 2% is fuzzier but more artistic; the concepts pen is cleaner, more technical, more correct (fast end ticks are not auto-corrected)
search deck:current to show all cards
searching in a specific field is “exact match only”, so you need wildcards for partial matches, e.g.
deck:current reading:*cit*
i set new cards to show first daily, so even if i can’t make time one day to review old cards, i still learn some new ones. this also batches the editing of cards together (my habit is to alter cards when i first see them) so that edits occur first, when i am most alert
advanced level
user manual
Link: apps.ankiweb.net/docs/manual.html
requires computer for advanced card types
swap profiles quickly by tap–and–hold on the profile name in the centre of the titlebar
collaborative anki decks using google docs
reddit.com/r/Anki/comments/dj8mb6/having_success_with_collaborative_anki_decks/
untried
osculator mac + wiimote
https://osculator.net/
lemur
https://itunes.apple.com/gb/app/lemur/id481290621?mt=8
android tablet + wiimote
http://innominatethoughts.com/technology/flashcards-like-a-boss-with-anki/
unified remote
https://itunes.apple.com/gb/app/unified-remote/id825534179?mt=8
mainstage
https://www.apple.com/mainstage/
wii drums osculator ableton live
http://blog.abletondrummer.com/wii-drums-to-ableton-live-via-osculator/
archive
25 free–mac, £17.49–ios. flashcards for learning through repeat exposure. great for learning languages etc. takes a lot of time to set up, but worth it. if i didn’t use pythonista programs (28, 29) for text editing on the iphone, i would use the desktop version for data entry/editing
26 for anki mobile; shinsu handwritten and fangsong printed traditional chinese fonts, download at 27
even language becomes corrupt, which is why so many who realised this insist we cannot find the way through language. yet language is mostly what we have to reach each other.
false purity insists that there is an ideal purity, untainted. by culturing this false purity, real purity is corrupted, and is buried from our awareness.