mike
indigenous knowledge networks in the face of global change
rodrigo cámara-leret et al. 2019
http://dx.doi.org/10.1073/pnas.1821843116
Plants play an important role for most indigenous communities in South America, and not merely as a source of food. They also provide raw materials for construction, tools, medicine, and much more. The extinction of a plant species therefore also endangers the very foundation of these peoples' way of life.
But there is another threat that has more or less gone unnoticed: the disappearance of the knowledge of what the different plant species are used for. The problem is that this is not written down. Passed down as a cultural inheritance, it exists only in the minds of the people -- and could therefore vanish almost unnoticed. "Very little is known about how vulnerable this knowledge is in the context of current global change," says Jordi Bascompte, professor of ecology at the University of Zurich. "There is therefore an urgent need to find out how biological and cultural factors interact with each other in determining the services provided by biodiversity."
Analysis of the use of palm trees
Consequently, Jordi Bascompte and his postdoc Miguel A. Fortuna teamed up with Rodrigo Cámara-Leret from the Royal Botanic Gardens in the UK to study these interactions on a large scale for the first time. For their study, they collated the knowledge of palm trees held by 57 indigenous communities in the Amazon basin, the Andes and the Chocó region. The researchers then mapped the different palm species and their uses as a network, from which they could identify the local and regional links between the knowledge of indigenous communities.
Each community knew around 18 palm species and 36 different possible uses on average. For example, the fruit is eaten, dried leaves are woven into hammocks and the trunks can be split and laid as flooring in huts. The study revealed that the knowledge of the different communities only overlapped partially, even with respect to the same species of palm.
Minimal loss of knowledge still has consequences
Using simulations, the researchers analyzed what would happen if knowledge of a particular species or use were lost. They found that the network is extremely fragile, with the loss of just a few components having the potential to make an enormous impact on the entire system: "In this context, cultural diversity is just as important as biological diversity," says Jordi Bascompte. "In particular, the simultaneous loss of plant species and cultural inheritance leads to a much faster disintegration of the indigenous knowledge network."
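The collapse simulations can be sketched as a toy model: a bipartite network linking palm species to the uses a community knows for them, where species are removed at random and a use counts as lost once no surviving species provides it. The species, uses, and links below are illustrative stand-ins, not the study's data or its actual simulation code.

```python
import random

# Toy bipartite knowledge network: palm species -> uses a community knows.
# Species and use names here are illustrative, not the study's data.
knowledge = {
    "Euterpe precatoria":   {"food", "roofing"},
    "Oenocarpus bataua":    {"food", "medicine"},
    "Astrocaryum chambira": {"fiber", "tools"},
    "Iriartea deltoidea":   {"flooring", "tools"},
}

def surviving_uses(net):
    """Uses still supported by at least one surviving species."""
    return set().union(*net.values()) if net else set()

def simulate_species_loss(net, n_removals, seed=0):
    """Remove species at random; track how many uses survive each step."""
    rng = random.Random(seed)
    net = {sp: uses.copy() for sp, uses in net.items()}
    trajectory = [len(surviving_uses(net))]
    for _ in range(n_removals):
        gone = rng.choice(sorted(net))
        del net[gone]
        trajectory.append(len(surviving_uses(net)))
    return trajectory

# Use count falls from 6 to 0 as all 4 species are removed.
print(simulate_species_loss(knowledge, 4))
```

A joint-loss variant in the spirit of the quote would also delete use entries (knowledge) independently of species, which makes the trajectory drop faster than species loss alone.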
Importance of cultural and biological factors
Bascompte and his colleagues concluded that, to date, too little attention has been paid to cultural factors. "The focus is typically directed toward the extinction of plant species. However, the irreplaceable knowledge that is gradually disappearing from indigenous communities is equally important for the service that an ecosystem provides."
The study also highlights the value of transdisciplinary collaboration between ecology and social science: "The relationship established between biological and cultural diversity can help strengthen the resilience of indigenous communities in the face of global change."
abstract The knowledge of nonliterate societies may vanish in silence, jeopardizing indigenous peoples’ livelihoods. Yet, this cultural component is missed by studies on ecosystem services that have historically emphasized the biological dimension. Here we fill this gap by introducing indigenous knowledge networks representing the wisdom of indigenous people on plant species and the services they provide. This approach allows us to assess how knowledge held by 57 Neotropical indigenous communities is structured locally and regionally, how it is influenced by turnover in biological and cultural heritage, and how the progressive loss of biocultural heritage may undermine the resilience of these communities.
Indigenous communities rely extensively on plants for food, shelter, and medicine. It is still unknown, however, to what degree their survival is jeopardized by the loss of either plant species or knowledge about their services. To fill this gap, here we introduce indigenous knowledge networks describing the wisdom of indigenous people on plant species and the services they provide. Our results across 57 Neotropical communities show that cultural heritage is as important as plants for preserving indigenous knowledge both locally and regionally. Indeed, knowledge networks collapse as fast when plant species are driven extinct as when cultural diffusion, either within or among communities, is lost. But it is the joint loss of plant species and knowledge that erodes these networks at a much higher rate. Our findings pave the road toward integrative policies that recognize more explicitly the inseparable links between cultural and biological heritage.
adaptive flexibility in category learning? young children exhibit smaller costs of selective attention than adults
blanco, n. j., & sloutsky, v. m. 2019
http://psycnet.apa.org/doi/10.1037/dev0000777
abstract Previous research has shown that when learning categories, adults and young children allocate attention differently. Adults tend to attend selectively, focusing primarily on the most relevant information, whereas young children tend to distribute their attention broadly. Although selective attention is useful in many situations, it also has costs. In addition to ignoring information that may turn out to be useful later, selective attention can have long-term costs, such as learned inattention—ignoring formerly irrelevant sources of information in novel situations. In 2 reported experiments, adults and 4-year-old children completed a category learning task in which an unannounced shift occurred such that information that was most relevant became irrelevant, whereas formerly irrelevant information became relevant. Costs stemming from this shift were assessed. The results indicate that adults exhibit greater costs due to learned inattention than young children. Distributing attention may be adaptive in young children, making them flexible to changing contingencies in the world and facilitating broad information gathering, both of which are useful when general knowledge about the environment is limited.
Researchers surprised adults and 4-year-old children participating in the study by making information that was irrelevant at the beginning of the experiment suddenly important for a task they had to complete.
"Adults had a hard time readjusting because they didn't learn the information they thought wouldn't be important," said Vladimir Sloutsky, co-author of the study and professor of psychology at The Ohio State University.
"Children, on the other hand, recovered quickly to the new circumstances because they weren't ignoring anything. I'm sure a lot of parents will recognize that tendency of children to notice everything, even when you wish they wouldn't."
Sloutsky conducted the study with Nathaniel Blanco, a postdoctoral researcher in psychology at Ohio State. Their research was published online in the journal Developmental Psychology and will appear in a future print edition.
The results show that children tend to distribute their attention broadly, while adults use selective attention to focus on information they believe is most important, Sloutsky said.
"Distributing attention may be adaptive for young children. By being attentive to everything, they gather more information which helps them learn more," Blanco said.
In one study, the researchers had 34 adults and 36 4-year-old children take part in a learning task.
They were presented with colorful images of "alien" creatures on a computer screen; each creature had seven identifiable features, including antennae, head and tail.
Participants were told there were two types of creatures, called Flurps and Jalets, and that they had to figure out which ones were which.
One feature was always different on Flurps and Jalets -- for example, the Jalets may have a blue tail and the Flurps an orange tail. In addition, the children and adults were told that most (but not all) of the Flurps had a certain type of feature, such as pink antennae.
One of the features was never mentioned in the instructions and it did not differ between the types of creatures. This was what the researchers called the "irrelevant feature."
After training, participants were shown a series of images of the creatures on the computer screen and indicated whether each one was a Flurp or a Jalet.
But halfway through the experiment, the researchers made an unannounced switch: The irrelevant feature became the feature that would determine whether the creature was a Flurp or a Jalet. This feature, which had been the same for both creatures before the switch, was now different.
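The stimulus structure described above (one perfectly predictive feature, several probabilistic ones, one constant irrelevant one, and a mid-experiment role swap) can be sketched as a toy generator. The feature names, the 75% match rate for the probabilistic features, and the choice to make the formerly predictive feature constant after the switch are all assumptions for illustration, not the study's parameters.

```python
import random

# Seven feature slots; the names are assumed, not taken from the study.
FEATURES = ["antennae", "head", "tail", "body", "feet", "wings", "ears"]

def make_creature(category, rng, predictive, irrelevant, switched=False):
    """Generate one stimulus as a dict of feature -> variant (0 or 1).

    Before the switch, `predictive` perfectly separates the categories and
    `irrelevant` is identical for both; after the switch the roles swap
    (making the old predictive feature constant is an assumption here).
    The remaining features match the category 75% of the time (assumed rate,
    mirroring "most but not all Flurps had a certain feature").
    """
    label = 0 if category == "Flurp" else 1
    creature = {}
    for f in FEATURES:
        if f == (irrelevant if switched else predictive):
            creature[f] = label                      # perfectly predictive
        elif f == (predictive if switched else irrelevant):
            creature[f] = 0                          # same for both categories
        else:
            creature[f] = label if rng.random() < 0.75 else 1 - label
    return creature

rng = random.Random(1)
# Pre-switch: "tail" determines the category, "feet" is uninformative.
jalet = make_creature("Jalet", rng, "tail", "feet")
# Post-switch: "feet" now determines the category instead.
jalet2 = make_creature("Jalet", rng, "tail", "feet", switched=True)
```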
After the shift, the adults were more confused than the children were -- they were less likely to learn the importance of the new feature.
In contrast, children were quick to realize that the formerly irrelevant feature was now the feature that would always reveal the difference between Flurps and Jalets.
Adults tried to use the probabilistic rules (such as "most of the Flurps have pink antennae") to guide their choices after the shift.
In this study, adults suffered from "learned inattention," Blanco said. They didn't pay attention to the formerly irrelevant feature because they believed it wouldn't be important.
Children as young as those in this study often have difficulty focusing attention in the way that the adults did, Sloutsky said.
"The immediate reason is the immaturity of their pre-frontal cortex," he said. "But we believe that distributing attention broadly also helps them learn more."
Sloutsky emphasized that adults have no problem distributing attention broadly if necessary. But in many tasks that adults do every day, selective attention is helpful.
"It is clear that for optimal performance at most jobs, selective attention is necessary. But distributed attention might be useful when you're learning something new and need to see everything that is going on."
could alternating the strategies produce a parrondo paradox?
deschooling society
ivan illich 1970
web.archive.org/web/20120119011357/http://ournature.org/~novembre/illich/1970_deschooling.html
web.archive.org/web/20120119011357/http://ournature.org/~novembre/illich/
how to be a straight-a student
cal newport 2007 9780767927192
cognitive productivity: using knowledge to become profoundly effective
luc p. beaudoin 2013
cognitive productivity with macos: 7 principles for getting smarter with knowledge
luc p. beaudoin 2020
what intelligence tests miss: the psychology of rational thought
keith stanovich 2009 not yet read
the relationship cure: a 5 step guide to strengthening your marriage, family, and friendships
john m. gottman 2001 not yet read
the seven principles for making marriage work: a practical guide from the country’s foremost relationship expert
john m. gottman 1999 not yet read
the lean startup: how today’s entrepreneurs use continuous innovation to create radically successful businesses
eric ries 2011 not yet read
the computer revolution in philosophy: philosophy, science and models of mind
aaron sloman 1978 not yet read
the practicing mind: bringing discipline and focus into your life
thomas m. sterner 2006 not yet read
deep work: rules for focused success in a distracted world
cal newport 2016
rapt: attention and the focused life
winifred gallagher 2009 not yet read
all things shining: reading the western classics to find meaning in a secular age
hubert dreyfus, sean kelly 2010 not yet read
so good they can’t ignore you: why skills trump passion in the quest for work you love
cal newport 2012 not yet read
digital minimalism: on living better with less technology
cal newport 2019 not yet read
mind wandering
aeon.co/essays/are-you-sleepwalking-now-what-we-know-about-mind-wandering
measuring actual learning versus feeling of learning in response to being actively engaged in the classroom
louis deslauriers et al. 2019
doi.org/10.1073/pnas.1821936116
though students felt like they learned more from traditional lectures, they actually learned more in active-learning classrooms.
The lead author Louis Deslauriers, Director of Science Teaching and Learning and senior Physics preceptor, knew that students would learn more from active learning. He published a key study in Science in 2011 that showed just that. But many students and faculty remained hesitant to switch to active learning.
"Often, students seemed genuinely to prefer smooth-as-silk traditional lectures," Deslauriers said. "We wanted to take them at their word. Perhaps they actually felt like they learned more from lectures than they did from active learning."
In addition to Deslauriers, the study is authored by Director of Science Education and Lecturer on Physics Logan McCarty, senior preceptor in Applied Physics Kelly Miller, preceptor in Physics Greg Kestin, and Kristina Callaghan, now a lecturer in Physics at the University of California, Merced.
The question of whether students' perceptions of their learning matches with their actual learning is particularly important, Deslauriers said, because though students eventually see the value of active learning, it can initially feel frustrating.
"Deep learning is hard work. The effort involved in active learning can be misinterpreted as a sign of poor learning," he said. "On the other hand, a superstar lecturer can explain things in such a way as to make students feel like they are learning more than they actually are."
To understand that dichotomy, Deslauriers and his co-authors designed an experiment that would expose students in an introductory physics class to both traditional lectures and active learning.
For the first 11 weeks of the 15-week class, students were taught using standard methods by an experienced instructor. In the 12th week, though, things changed -- half the class was randomly assigned to a classroom that used active learning, while the other half attended highly polished lectures. In a subsequent class, the two groups were reversed. Notably, both groups used identical class content and only active engagement with the material was toggled on and off.
Following each class, students were surveyed on how much they agreed or disagreed with statements like "I feel like I learned a lot from this lecture" and "I wish all my physics courses were taught this way." Students were also tested on how much they learned in the class with 12 multiple choice questions.
When the results were tallied, the authors found that students felt like they learned more from the lectures, but in fact scored higher on tests following the active learning sessions.
"Actual learning and feeling of learning were strongly anticorrelated," Deslauriers said, "as shown through the robust statistical analysis by co-author Kelly Miller, who is an expert in educational statistics and active learning."
But those results, the study authors warned, shouldn't be interpreted as suggesting students dislike active learning. In fact many studies have shown students quickly warm to the idea, once they begin to see the results.
"In all the courses at Harvard that we've transformed to active learning," Deslauriers said, "the overall course evaluations went up."
"It can be tempting to engage the class simply by folding lectures into a compelling 'story,' especially when that's what students seem to like," said Kestin, a co-author of the study, who is a physicist and a video producer with NOVA | PBS. "I show my students the data from this study on the first day of class to help them appreciate the importance of their own involvement in active learning."
McCarty, who oversees curricular efforts across the sciences, hopes this study will encourage more faculty colleagues to use active learning in their courses.
"We want to make sure that other instructors are thinking hard about the way they're teaching," he said. "In our classes, we start each topic by asking students to gather in small groups to solve some problems. While they work, we walk around the room to observe them and answer questions. Then we come together and give a short lecture targeted specifically at the misconceptions and struggles we saw during the problem-solving activity. So far we've transformed over a dozen classes to use this kind of active learning approach. It's extremely efficient -- we can cover just as much material as we would using lectures."
A pioneer in work on active learning, Professor of Physics Eric Mazur hailed the study as debunking long-held beliefs about how students learn.
"This work unambiguously debunks the illusion of learning from lectures," he said. "It also explains why instructors and students cling to the belief that listening to lectures constitutes learning. I recommend every lecturer reads this article."
The work also earned accolades from Dean of Science Christopher Stubbs, Professor of Physics and of Astronomy, who was an early convert to this style of active learning.
"When I first switched to teaching using active learning, some students resisted that change," he said. "This research confirms that faculty should persist and encourage active learning. Active engagement in every classroom, led by our incredible science faculty, should be the hallmark of residential undergraduate education at Harvard."
Ultimately, Deslauriers said, the study shows that it's important to ensure that both instructors and students aren't fooled into thinking that lectures -- even well-presented ones -- are the best learning option.
"A great lecture can get students to feel like they are learning a lot," he said. "Students might give fabulous evaluations to an amazing lecturer based on this feeling of learning, even though their actual learning isn't optimal. This could help to explain why study after study shows that student evaluations seem to be completely uncorrelated with actual learning."
abstract Despite active learning being recognized as a superior method of instruction in the classroom, a major recent survey found that most college STEM instructors still choose traditional teaching methods. This article addresses the long-standing question of why students and faculty remain resistant to active learning. Comparing passive lectures with active learning using a randomized experimental approach and identical course materials, we find that students in the active classroom learn more, but they feel like they learn less. We show that this negative correlation is caused in part by the increased cognitive effort required during active learning. Faculty who adopt active learning are encouraged to intervene and address this misperception, and we describe a successful example of such an intervention.
We compared students’ self-reported perception of learning with their actual learning under controlled conditions in large-enrollment introductory college physics courses taught using 1) active instruction (following best practices in the discipline) and 2) passive instruction (lectures by experienced and highly rated instructors). Both groups received identical class content and handouts, students were randomly assigned, and the instructor made no effort to persuade students of the benefit of either method. Students in active classrooms learned more (as would be expected based on prior research), but their perception of learning, while positive, was lower than that of their peers in passive environments. This suggests that attempts to evaluate instruction based on students’ perceptions of learning could inadvertently promote inferior (passive) pedagogical methods. For instance, a superstar lecturer could create such a positive feeling of learning that students would choose those lectures over active learning. Most importantly, these results suggest that when students experience the increased cognitive effort associated with active learning, they initially take that effort to signify poorer learning. That disconnect may have a detrimental effect on students’ motivation, engagement, and ability to self-regulate their own learning. Although students can, on their own, discover the increased value of being actively engaged during a semester-long course, their learning may be impaired during the initial part of the course. We discuss strategies that instructors can use, early in the semester, to improve students’ response to being actively engaged in the classroom.
improved learning in a large-enrollment physics class
louis deslauriers et al. 2011
doi.org/10.1126/science.1201783
It is not the intention of this paper to review the Hawthorne effect and its history, but we comment on it only because this is such a frequent question raised about this work. It is not plausible that it resulted in a significant impact on the results reported here. As discussed extensively in (S1-S3), analyses of the methodology and data used in the original Hawthorne plant studies reveal both serious flaws in the methodology, and an absence of statistically significant data supporting the existence of the claimed effect. Thus, the failure to replicate such an effect in an educational setting, as reported in (S4), is not surprising.
respiration modulates olfactory memory consolidation in humans
artin arshamian et al. 2018
doi.org/10.1523/jneurosci.3360-17.2018
"Our study shows that we remember smells better if we breathe through the nose when the memory is being consolidated -- the process that takes place between learning and memory retrieval," says Artin Arshamian, researcher at the Department of Clinical Neuroscience, Karolinska Institutet. "This is the first time someone has demonstrated this."
One reason why this phenomenon has not previously been available for study is that the most common laboratory animals -- rats and mice -- cannot breathe naturally through their mouths.
For the study, the researchers had participants learn twelve different smells on two separate occasions. They were then asked to either breathe through their noses or mouths for one hour. When the time was up, the participants were presented with the old as well as a new set of twelve smells, and asked to say if each one was from the learning session or new.
The results showed that when the participants breathed through their noses between the time of learning and recognition, they remembered the smells better.
New method facilitates measuring activity in the brain
"The next step is to measure what actually happens in the brain during breathing and how this is linked to memory," says Dr Arshamian. "This was previously a practical impossibility as electrodes had to be inserted directly into the brain. We've managed to get round this problem and now we're developing, with my colleague Johan Lundström, a new means of measuring activity in the olfactory bulb and brain without having to insert electrodes."
Earlier research has shown that the receptors in the olfactory bulb detect not only smells but also variations in the airflow itself. In the different phases of inhalation and exhalation, different parts of the brain are activated. But how the synchronisation of breathing and brain activity happens and how it affects the brain and therefore our behaviour is unknown. Traditional medicine has often, however, stressed the importance of breathing.
"The idea that breathing affects our behaviour is actually not new," says Dr Arshamian. "In fact, the knowledge has been around for thousands of years in such areas as meditation. But no one has managed to prove scientifically what actually goes on in the brain. We now have tools that can reveal new clinical knowledge."
abstract In mammals respiratory-locked hippocampal rhythms are implicated in the scaffolding and transfer of information between sensory and memory networks. These oscillations are entrained by nasal respiration and driven by the olfactory bulb. They then travel to the piriform cortex where they propagate further downstream to the hippocampus and modulate neural processes critical for memory formation. In humans, bypassing nasal airflow through mouth-breathing abolishes these rhythms and impacts encoding as well as recognition processes, thereby reducing memory performance. It has been hypothesized that similar behavior should be observed for the consolidation process, the stage between encoding and recognition, where memory is reactivated and strengthened. However, direct evidence for such an effect is lacking in human and nonhuman animals. Here we tested this hypothesis by examining the effect of respiration on consolidation of episodic odor memory. In two separate sessions, female and male participants encoded odors followed by a 1 h awake resting consolidation phase where they either breathed solely through their nose or mouth. Immediately after the consolidation phase, memory for odors was tested. Recognition memory significantly increased during nasal respiration compared with mouth respiration during consolidation. These results provide the first evidence that respiration directly impacts consolidation of episodic events, and lends further support to the notion that core cognitive functions are modulated by the respiratory cycle.
SIGNIFICANCE STATEMENT Memories pass through three main stages in their development: encoding, consolidation, and retrieval. Growing evidence from animal and human studies suggests that respiration plays an important role in the behavioral and neural mechanisms associated with encoding and recognition. Specifically, nasal, but not mouth, respiration entrains neural oscillations that enhance encoding and recognition processes. We demonstrate that respiration also affects the consolidation stage. Breathing through the nose compared with the mouth during consolidation enhances recognition memory. This demonstrates, first, that nasal respiration is important during the critical period where memories are reactivated and strengthened. Second, it suggests that the neural mechanisms responsible may emerge from nasal respiration.
out of our minds
ken robinson 2001, 2011
9781907312472
learning style myth
learning styles: concepts and evidence
hal pashler et al. 2008
journals.sagepub.com/doi/10.1111/j.1539-6053.2009.01038.x
dispelling the myth: training in education or neuroscience decreases but does not eliminate beliefs in neuromyths
kelly macdonald et al. 2017
doi.org/10.3389/fpsyg.2017.01314
when intuitive conceptions overshadow pedagogical content knowledge: teachers’ conceptions of students’ arithmetic word problem solving strategies
katarina gvozdic, emmanuel sander 2018
doi.org/10.1007/s10649-018-9806-7
maybe they’re born with it, or maybe it’s experience: toward a deeper understanding of the learning style myth
shaylene nancekivell et al. 2019
doi.org/10.1037/edu0000366
In two online experiments with 668 participants, more than 90 percent believed people learn better if they are taught in their predominant learning style, whether that is visual, auditory or tactile. But those who believed in learning styles split evenly into an "essentialist" group, with more strongly held beliefs, and a "non-essentialist" group, with more flexible beliefs about learning styles, said lead researcher Shaylene Nancekivell, PhD, a visiting scholar at the University of Michigan.
"We found that some people are more likely to believe that students inherit their learning style from their parents and that learning styles affect brain function," she said. "We also found that educators who work with younger children are more likely to hold this essentialist view. Many parents and educators may be wasting time and money on products, services and teaching methods that are geared toward learning styles."
In their responses to survey questions, the essentialist group members were more likely to state that learning styles are heritable, instantiated in the brain, don't change with age, mark distinct kinds of people, and predict both academic and career success. The non-essentialist group held looser beliefs about learning styles, viewing them as malleable, overlapping and more determined by environmental factors. The research was published online in the Journal of Educational Psychology.
Psychological essentialism is the belief that certain categories of people have a true nature that is biologically based and highly predictive of many factors in their lives. People with essentialist opinions about learning styles may be more resistant to changing their strongly held views even when they learn that numerous studies have debunked the concept of learning styles, Nancekivell said.
Previous research has shown that the learning styles model can undermine education in many ways. Educators spend time and money tailoring lessons to certain learning styles for different students even though all students would benefit from learning through various methods. Students study in ways that match their perceived learning style even though it won't help them succeed. Some teacher certification programs incorporate learning styles into their courses, which perpetuates the myth for the next generation of teachers. Academic support centers and a plethora of products also are focused on learning styles, despite the lack of scientific evidence supporting them.
The first experiment included participants from the general U.S. workforce, including educators. The second experiment was weighted so that at least half of the participants were educators to provide a better understanding of their views. The small percentage of participants who didn't believe in learning styles weren't included in the analysis because the study was examining differing beliefs about learning styles. Demographic factors such as race, gender, parental status and income level didn't affect people's views on learning styles in the study, but educators of young children were more likely to have essentialist beliefs.
"My biggest concern is that time is being spent teaching young children maladaptive strategies for learning," Nancekivell said. "It is important that children from a very young age are taught with the best practices so they will succeed."
Previous surveys in the United States and other industrialized countries have shown that 80% to 95% of people believe in learning styles. It's difficult to say how that myth became so widespread, Nancekivell said.
"It seems likely that the appeal of the learning styles myth rests in its fit with the way people like to think about behavior," she said. "People prefer brain-based accounts of behavior, and they like to categorize people into types. Learning styles allow people to do both of those things."
abstract Decades of research suggest that learning styles, or the belief that people learn better when they receive instruction in their dominant way of learning, may be one of the most pervasive myths about cognition. Nonetheless, little is known about what it means to believe in learning styles. The present investigation uses one theoretical framework—psychological essentialism—to explore the content and consistency of people’s learning style beliefs. Psychological essentialism is a belief that certain categories (such as dogs, girls, or visual learners) have an underlying reality or true nature that is biologically based and highly predictive of many other features (Gelman, 2003). We tested the prevalence of erroneous essentialist beliefs regarding learning styles in both educators and noneducators, including that learning styles are innate, unchanging, discrete, and wired into the brain. In each of two experiments, we identified two groups of learning style believers, with one group holding an essentialist interpretation of learning styles, and the other group holding a nonessentialist interpretation of learning styles. No differences were found between educators’ and noneducators’ beliefs. In fact, only one factor was found to be a significant predictor of learning style beliefs for educators: the age of the population with whom they work. Specifically, those who worked with younger children were more likely to interpret learning styles in an essentialist way. Together the findings demonstrate that learning style beliefs are far more complex and variable than previously recognized.
n-back versus complex span working memory training
kara j. blacker et al. 2017
doi.org/10.1007/s41465-017-0044-1
complex span and n-back measures of working memory: a meta-analysis
thomas s. redick, dakota r. b. lindsey 2013
doi.org/10.3758/s13423-013-0453-9
cerebellar granule cells acquire a widespread predictive feedback signal during motor learning
andrea giovannucci et al. 2017
doi.org/10.1038/nn.4531
learning as supervised trying
fred dufour 2016
the eighty five percent rule for optimal learning
robert c. wilson et al. 2019
doi.org/10.1038/s41467-019-12552-4
Learning is optimal when failure occurs 15% of the time. Put another way, it's when the right answer is given 85% of the time.
"These ideas that were out there in the education field - that there is this 'zone of proximal difficulty,' in which you ought to be maximizing your learning - we've put that on a mathematical footing," said UArizona assistant professor of psychology and cognitive science Robert Wilson, lead author of the study, titled "The Eighty Five Percent Rule for Optimal Learning."
Wilson and his collaborators at Brown University, the University of California, Los Angeles and Princeton came up with the so-called "85% Rule" after conducting a series of machine-learning experiments in which they taught computers simple tasks, such as classifying different patterns into one of two categories or classifying photographs of handwritten digits as odd versus even numbers, or low versus high numbers.
The computers learned fastest in situations in which the difficulty was such that they responded with 85% accuracy.
"If you have an error rate of 15% or accuracy of 85%, you are always maximizing your rate of learning in these two-choice tasks," Wilson said.
When researchers looked at previous studies of animal learning, they found that the 85% Rule held true in those instances as well, Wilson said.
When we think about how humans learn, the 85% Rule would most likely apply to perceptual learning, in which we gradually learn through experience and examples, Wilson said. Imagine, for instance, a radiologist learning to tell the difference between images of tumors and non-tumors.
"You get better at figuring out there's a tumor in an image over time, and you need experience and you need examples to get better," Wilson said. "I can imagine giving easy examples and giving difficult examples and giving intermediate examples. If I give really easy examples, you get 100% right all the time and there's nothing left to learn. If I give really hard examples, you'll be 50% correct and still not learning anything new, whereas if I give you something in between, you can be at this sweet spot where you are getting the most information from each particular example."
Since Wilson and his collaborators were looking only at simple tasks in which there was a clear correct and incorrect answer, Wilson won't go so far as to say that students should aim for a B average in school. However, he does think there might be some lessons for education that are worth further exploration.
"If you are taking classes that are too easy and acing them all the time, then you probably aren't getting as much out of a class as someone who's struggling but managing to keep up," he said. "The hope is we can expand this work and start to talk about more complicated forms of learning."
abstract Researchers and educators have long wrestled with the question of how best to teach their clients be they humans, non-human animals or machines. Here, we examine the role of a single variable, the difficulty of training, on the rate of learning. In many situations we find that there is a sweet spot in which training is neither too easy nor too hard, and where learning progresses most quickly. We derive conditions for this sweet spot for a broad class of learning algorithms in the context of binary classification tasks. For all of these stochastic gradient-descent based learning algorithms, we find that the optimal error rate for training is around 15.87% or, conversely, that the optimal training accuracy is about 85%. We demonstrate the efficacy of this ‘Eighty Five Percent Rule’ for artificial neural networks used in AI and biologically plausible neural networks thought to describe animal learning.
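The 15.87% figure in the abstract is not arbitrary: for the Gaussian noise model the authors analyze, the optimal training error rate comes out to Φ(-1), the standard normal CDF evaluated at -1. A minimal sketch of that closed form (illustrative code, not the authors' derivation):

```python
from math import erf, sqrt

def std_normal_cdf(x: float) -> float:
    """Cumulative distribution function of the standard normal, Phi(x)."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Optimal training error rate for the paper's class of SGD-based
# binary classification learners: Phi(-1).
optimal_error = std_normal_cdf(-1.0)
optimal_accuracy = 1.0 - optimal_error

print(f"optimal error rate ~ {optimal_error:.4f}")    # ~0.1587
print(f"optimal accuracy   ~ {optimal_accuracy:.4f}")  # ~0.8413
```

So "train at 85% accuracy" is shorthand for holding the error rate near Φ(-1) ≈ 15.87% throughout training.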
montessori preschool elevates and equalizes child outcomes: a longitudinal study
angeline s. lillard et al. 2017
doi.org/10.3389/fpsyg.2017.01783
resources, learning, and policy: the relative effects of social and financial capital on student learning in schools
serena j. salloum et al. 2018
doi.org/10.1080/10824669.2018.1496023
Social capital is the name scientists give to the network of relationships between school officials, teachers, parents and the community that builds trust and norms promoting academic achievement.
The study found that social capital had a three- to five-times larger effect than financial capital on reading and math scores in Michigan schools.
“When we talk about why some schools perform better than others, differences in the amount of money they have to spend is often assumed to be an explanation,” said Roger Goddard, co-author of the study and Novice G. Fawcett Chair and professor of educational administration at The Ohio State University.
“We found that money is certainly important. But this study also shows that social capital deserves a larger role in our thinking about cost-effective ways to support students, especially the most vulnerable.”
Goddard conducted the research with Serena Salloum of Ball State University and Dan Berebitsky of Southern Methodist University. The study appears online in the Journal of Education for Students Placed at Risk and will be published in a future print edition.
The study involved 5,003 students and their teachers in 78 randomly selected public elementary schools in Michigan. The sample is representative of the demographics of all elementary schools in the state.
Teachers completed a questionnaire that measured levels of social capital in their schools. They rated how much they agreed with statements like “Parent involvement supports learning here,” “Teachers in this school trust their students” and “Community involvement facilitates learning here.”
State data on instructional expenditures per pupil was used to measure financial capital at each school.
Finally, the researchers used student performance on state-mandated fourth-grade reading and mathematics tests to measure student learning.
Results showed that, on average, schools that spent more money did have better test scores than those that spent less. But the effect of social capital was three times larger than that of financial capital on math scores, and five times larger on reading scores.
“Social capital was not only more important to learning than instructional expenditures, but also more important than the schools’ poverty, ethnic makeup or prior achievement,” Goddard said.
While social capital tended to go down in schools as poverty levels increased, it wasn’t a major decrease.
“We could see from our data that more than half of the social capital that schools have access to has nothing to do with the level of poverty in the communities they serve,” he said.
“Our results really speak to the importance and the practicality of building social capital in high-poverty neighborhoods where they need it the most.”
The study also found that the money spent on student learning was not associated with levels of social capital in schools. That means schools can’t “buy” social capital just by spending more money. Social relationships require a different kind of investment, Goddard said.
The study can’t answer how to cultivate social capital in schools. But Goddard has some ideas.
One is for schools to do more to help teachers work together.
“Research shows that the more teachers collaborate, the more they work together on instructional improvement, the higher the test scores of their students. That’s because collaborative work builds social capital that provides students with access to valuable support,” he said.
Building connections to the community is important, too. One idea is school-based mentoring programs that connect children to adults in the community.
“Sustained interactions over time focused on children’s learning and effective teaching practice are the best way for people to build trust and build networks that are at the heart of social capital,” Goddard said.
“We need intentional effort by schools to build social capital. We can’t leave it to chance.”
abstract In this paper, we note the contrasting positions occupied by social and financial capital in state and federal education policy and compare their relative impacts on student learning. To make such a comparison, we analyzed data from a representative sample of Michigan’s elementary schools using multilevel structural equation modeling to examine the relationships among social capital, instructional expenditures, and student achievement. We found that the level of social capital characterizing schools was not a function of instructional expenditures. We also found that both social and financial capital had a positive and significant relationship with reading and mathematics student achievement. However, the effect of social capital was three and five times larger than that of financial capital on mathematics and reading, respectively. We discuss the implications of these findings for education policy and programs that might improve student learning by strengthening social relationships.
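The "three to five times larger" claims are comparisons of standardized effects, which put predictors measured in different units (survey scores vs. dollars per pupil) on a common scale. A toy sketch with synthetic data of how standardized regression coefficients support that kind of comparison (all variable names and coefficient values here are made up for illustration, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical school-level data: both predictors influence achievement,
# social capital three times as strongly by construction.
social_capital = rng.normal(0.0, 1.0, n)
spending = rng.normal(0.0, 1.0, n)
achievement = 0.6 * social_capital + 0.2 * spending + rng.normal(0.0, 1.0, n)

def standardize(x):
    return (x - x.mean()) / x.std()

# Regress the standardized outcome on standardized predictors (plus intercept),
# so the coefficients are directly comparable effect sizes.
X = np.column_stack([
    np.ones(n),
    standardize(social_capital),
    standardize(spending),
])
y = standardize(achievement)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

ratio = beta[1] / beta[2]
print(f"standardized effect of social capital: {beta[1]:.2f}")
print(f"standardized effect of spending:       {beta[2]:.2f}")
print(f"ratio: {ratio:.1f}")  # ~3 by construction, up to sampling noise
```

The actual study uses multilevel structural equation modeling rather than plain OLS, but the logic of comparing standardized effects is the same.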
forebrain-specific, conditional silencing of staufen2 alters synaptic plasticity, learning, and memory in rats
stefan m. berger et al. 2017
doi.org/10.1186/s13059-017-1350-8
increased striatal activity in adolescence benefits learning
s. peters, e. a. crone
doi.org/10.1038/s41467-017-02174-z
the neuronal gene arc encodes a repurposed retrotransposon gag protein that mediates intercellular rna transfer
elissa d. pastuzyn et al. 2017
doi.org/10.1016/j.cell.2017.12.024
•The neuronal gene Arc encodes a protein that forms virus-like capsids
•Arc protein exhibits similar biochemical properties as retroviral Gag proteins
•Endogenous Arc protein is released from neurons in extracellular vesicles (EVs)
•Arc EVs and capsids can mediate intercellular transfer of Arc mRNA in neurons
The neuronal gene Arc is essential for long-lasting information storage in the mammalian brain, mediates various forms of synaptic plasticity, and has been implicated in neurodevelopmental disorders. However, little is known about Arc’s molecular function and evolutionary origins. Here, we show that Arc self-assembles into virus-like capsids that encapsulate RNA. Endogenous Arc protein is released from neurons in extracellular vesicles that mediate the transfer of Arc mRNA into new target cells, where it can undergo activity-dependent translation. Purified Arc capsids are endocytosed and are able to transfer Arc mRNA into the cytoplasm of neurons. These results show that Arc exhibits similar molecular properties to retroviral Gag proteins. Evolutionary analysis indicates that Arc is derived from a vertebrate lineage of Ty3/gypsy retrotransposons, which are also ancestors to retroviruses. These findings suggest that Gag retroelements have been repurposed during evolution to mediate intercellular communication in the nervous system.
black box thinking: why most people never learn from their mistakes, but some do
matthew syed 2015 9780698408876
advancing the science of collaborative problem solving
arthur c. graesser et al. 2018
doi.org/10.1177/1529100618808244
an interdisciplinary team of researchers identifies the essential cognitive and social components of collaborative problem solving (CPS) and shows how integrating existing knowledge from a variety of fields can lead to new ways of assessing and training these abilities.
"CPS is an essential skill in the workforce and the community because many of the problems faced in the modern world require teams to integrate group achievements with team members' idiosyncratic knowledge," the authors of the report say.
As societies and technologies become increasingly complex, they generate increasingly complex problems. Devising efficient, effective, and innovative solutions to these complex problems requires CPS skills that most students lack. According to a 2015 assessment of more than 500,000 15-year-old students conducted by the Organisation for Economic Cooperation and Development, only 8% of students around the world showed strong CPS skills.
"The experiences of students in and out of the classroom are not preparing them for these skills that are needed as adults," Graesser and colleagues write.
This unique set of cognitive and social skills includes:
Shared understanding: Group members share common goals when solving a new problem.
Accountability: The contributions that each member makes are visible to the rest of the group.
Differentiated roles: Group members draw on their specific expertise to complete different tasks.
Interdependency: Group members depend on the contributions of others to solve the problem.
One reason for the lack of CPS training is a deficit in evidence-based standards and curricula. Secondary school curricula typically focus on teaching task- and discipline-specific knowledge, placing little emphasis on developing students' ability to communicate and collaborate effectively.
"Students rarely receive meaningful instruction, modeling, and feedback on collaboration," the researchers note.
When students do receive training relevant to CPS, it is often because they participate in extracurricular activities such as band, sports, student newspapers, and volunteer activities. Even then, the collaborative competencies are not directly relevant to problem solving. The authors argue that it is time to make CPS activities a core part of the curriculum.
Although considerable psychological, educational, and management research has examined factors that contribute to effective learning, teamwork, and decision making, research that directly examines how to improve collaborative problem solving is scarce.
According to the authors, "we are nearly at ground zero in identifying pedagogical approaches to improving CPS skills."
Developing and implementing effective CPS training stands to have significant societal impacts across a wide range of domains, including business, science, education, technology, environment, and public health. In a project funded by the National Science Foundation, for example, Fiore and other research team members are training students to collaborate across a range of disciplines -- including environmental science, ecology, biology, law, and policy -- to identify ways to address social, business, and agricultural effects of rising sea levels in Virginia's Eastern Shore.
"It's exciting to engage in real world testing of methods developed in laboratory studies on teamwork, to see how feedback on collaboration, and reflection on that feedback to improve teamwork strategies, can improve students' problem solving," Fiore explains.
Identifying the necessary components of this kind of training and determining how to translate those components across a variety of real-world settings will, itself, require interdisciplinary cooperation among researchers, educators, and policymakers.
In the commentary, Gauvain emphasizes that achieving a comprehensive understanding of CPS requires taking a developmental perspective and she notes that psychological scientists will be essential in this endeavor. Graesser and colleagues agree:
"When psychological scientists collaborate with educational researchers, computer scientists, psychometricians, and educational experts, we hope to move forward in addressing this global deficit in CPS," they conclude.
abstract Collaborative problem solving (CPS) has been receiving increasing international attention because much of the complex work in the modern world is performed by teams. However, systematic education and training on CPS is lacking for those entering and participating in the workforce. In 2015, the Programme for International Student Assessment (PISA), a global test of educational progress, documented the low levels of proficiency in CPS. This result not only underscores a significant societal need but also presents an important opportunity for psychological scientists to develop, adopt, and implement theory and empirical research on CPS and to work with educators and policy experts to improve training in CPS. This article offers some directions for psychological science to participate in the growing attention to CPS throughout the world. First, it identifies the existing theoretical frameworks and empirical research that focus on CPS. Second, it provides examples of how recent technologies can automate analyses of CPS processes and assessments so that substantially larger data sets can be analyzed and so students can receive immediate feedback on their CPS performance. Third, it identifies some challenges, debates, and uncertainties in creating an infrastructure for research, education, and training in CPS. CPS education and assessment are expected to improve when supported by larger data sets and theoretical frameworks that are informed by psychological science. This will require interdisciplinary efforts that include expertise in psychological science, education, assessment, intelligent digital technologies, and policy.
modulating cellular cytotoxicity and phototoxicity of fluorescent organic salts through counterion pairing
deanna broadwater et al. 2019
doi.org/10.1038/s41598-019-51593-z
interesting story of how this paper came to be
"This work has the potential to transform fluorescent probes for broad societal impact through applications ranging from biomedicine to photocatalysis -- the acceleration of chemical reactions with light," he said. "Our solar research inspired this cancer project, and in turn, focusing on cancer cells has advanced our solar cell research; it's been an amazing feedback loop."
Prior to the Lunts' combined effort, fluorescent dyes used for therapeutics and diagnostics, aka "theranostics," had shortcomings, such as low brightness, high toxicity to cells, poor tissue penetration and unwanted side effects.
By optoelectronically tuning organic salt nanoparticles used as theranostics, the Lunts were able to control them in a range of cancer studies. Coaxing the nanoparticles into the nontoxic zone resulted in enhanced imaging, while pushing them into the phototoxic -- or light-activated -- range produced effective on-site tumor treatment.
The key was learning to control the electronics of their photoactive molecules independently from their optical properties and then making the leap to apply this understanding in a new way to a seemingly unrelated field.
Richard had recently discovered the ability to electronically tune these salts from his work in converting photovoltaics into solar glass. Sophia had long studied metabolic pathways unique to cancer cells. It was when the Lunts were discussing solar glass during a walk that they made the connection: Molecules active in the solar cells might also be used to more effectively target and kill cancer cells.
A journey of 1,000 miles
Their walks had rather unscientific beginnings. Shortly after the Lunts met at Princeton University, Richard moved to another university. To maintain their long-distance relationship, they scheduled daily phone calls. Upon their arrival at MSU, individual academic career demands replaced geographic distance as a challenge to their busy lives.
To connect daily, they take CEO-style walks together every evening. The two-mile saunters take place rain or shine, and they often engage in scientific discussions. The three keys to their walks are intentional curiosity, perseverance and the merging of different fields and perspectives, Sophia said.
"We talk science, strategic plans for our careers and our various grants," she said. "We ping ideas off each other. Our continual conversations brainstorming ideas on a particular topic or challenge often lead to those exciting 'aha' moments."
Their walks have helped them push through many challenges.
"Our first experiments did not turn out as expected; I'm surprised that we didn't give up given how crazy the idea seemed at first," Richard said. "Figuring out how to do this research took many walks."
Obviously, the results were worth the hike. Today, Richard designs the molecules; Babak Borhan, MSU chemist, synthesizes and improves them; and Sophia tests their photoactive inventions in cancer cell lines and mouse models.
Future research will work to improve the theranostics' effectiveness, decrease toxicity and reduce side effects. The Lunts have applied for a patent for their work, and they're looking forward to eventually pushing their photoactive molecule findings through clinical trials.
"Though that will take many more walks," Richard said with a smile.
abstract Light-activated theranostics offer promising opportunities for disease diagnosis, image-guided surgery, and site-specific personalized therapy. However, current fluorescent dyes are limited by low brightness, high cytotoxicity, poor tissue penetration, and unwanted side effects. To overcome these limitations, we demonstrate a platform for optoelectronic tuning, which allows independent control of the optical properties from the electronic properties of fluorescent organic salts. This is achieved through cation-anion pairing of organic salts that can modulate the frontier molecular orbital without impacting the bandgap. Optoelectronic tuning enables decoupled control over the cytotoxicity and phototoxicity of fluorescent organic salts by selective generation of mitochondrial reactive oxygen species that control cell viability. We show that through counterion pairing, organic salt nanoparticles can be tuned to be either nontoxic for enhanced imaging, or phototoxic for improved photodynamic therapy.
dynamic salience processing in paraventricular thalamus gates associative learning
yingjie zhu et al. 2018
doi.org/10.1126/science.aat0481
the broad, ragged cut: aptitude and iq tests are used to distinguish those young people who deserve a chance from those who do not. do they work? are aptitude tests an accurate measure of human potential?
elizabeth svoboda 2018 also what makes a hero? 2013
aeon.co/essays/are-aptitude-tests-an-accurate-measure-of-human-potential
iq is largely a pseudoscientific swindle
nassim nicholas taleb 2019
medium.com/incerto/iq-is-largely-a-pseudoscientific-swindle-f131c101ba39
metacognitive scaffolding boosts cognitive and neural benefits following executive attention training in children
joan paul pozuelos et al. 2018
doi.org/10.1111/desc.12756
In addition, the study shows that the beneficial effects of training on the brain and on intelligence are greater when an educator implements a coaching strategy designed to help the child understand their own training process.
The training program was developed by researchers at the UGR and consists of exercises based on experimental paradigms that activate the brain areas responsible for the regulation and control of attention. The activities are performed on a computer or tablet and require focusing attention and responding deliberately in situations where the dominant response is not the correct one. Other exercises require keeping instructions in memory and adapting to changing rules.
"The results of this research suggest that it is important to educate attention from early childhood," explains the lead author of this work, the researcher at the Experimental Psychology Department of the University of Granada María Rosario Rueda Cuerva.
In addition, they indicate that the most effective strategies are those in which the educator helps the child to reflect on his or her learning process. "Through the education of attention we can improve the intelligence of children and prepare them for formal learning in school," says the researcher.
abstract Interventions including social scaffolding and metacognitive strategies have been used in educational settings to promote cognition. In addition, increasing evidence shows that computerized process‐based training enhances cognitive skills. However, no prior studies have examined the effect of combining these two training strategies. The goal of this study was to test the combined effect of metacognitive scaffolding and computer‐based training of executive attention in a sample of typically developing preschoolers at the cognitive and brain levels. Compared to children in the regular training protocol and an untrained active control group, children in the metacognitive group showed larger gains on intelligence and significant increases on an electrophysiological index associated with conflict processing. Moreover, changes in the conflict‐related brain activity predicted gains in intelligence in the metacognitive scaffolding group. These results suggest that metacognitive scaffolding boosts the influence of process‐based training on cognitive efficiency and brain plasticity related to executive attention.
predictions as a window into learning: anticipatory fixation offsets carry more information about environmental statistics than reactive stimulus-responses
giuseppe notaro et al. 2019
doi.org/10.1167/19.2.8
Many factors intervene between what people actually know and how they respond based on that knowledge. These factors are known to include the level of attention allocated to external stimuli, the interpretation of sensory information, and complex decision processes. All of these factors affect how people respond to stimuli they see. However, they do not affect anticipatory movements, and the authors examined whether it is possible to determine what people know, and how they learn, by studying anticipatory behaviors alone. Their findings suggest that anticipatory behavior may in certain cases be more informative about learning. These results open future directions for studying learning in populations that give unreliable responses to external stimuli, such as young children or individuals with physical or mental disorders, for whom it may be more difficult to assess the degree of attention and understanding.
The study
How is it possible to know whether one is learning and assimilating useful information? The novel answer suggested by the study is that it is useful to observe preparatory, unconscious eye movements, because these offer a window into the process of learning. In the experiment, the researchers collected data using an eye-tracker, a device that measures gaze direction. "We presented volunteers with series of images that appeared on the left or right side of the screen according to a specific, learnable pattern," Notaro explains. "We observed how fast people were to look at those images, depending on whether the images were presented at an unexpected or expected location. Surprisingly, the position of the eyes before the next image was presented was biased towards the expected position of the next image. This was a very small eye movement that turned out to be highly informative. It let us infer that the brain can prepare in advance once information is learned. This makes it possible to isolate a mental state that holds before obtaining the 'usual' responses that are triggered by external stimuli (such as verbal answers or button presses)."
Application scenarios
These findings open potential application scenarios, including in the areas of education and health, and particularly in relation to populations with attention or communication deficits. "The presence of these anticipatory signals," Hasson adds, "provides us with an opportunity to measure attentive capacity and learning with greater precision. These are tiny signals that very likely reflect unconscious processes. At the same time, they are highly reliable, and allow us to forecast how participants may respond to stimuli. The interrelation of learning and prediction is a topic that arouses much interest across many fields, not only in the scientific community, and it touches on areas that may be very close to our daily life. For example, there are already large investments made today in areas such as online advertisement and entertainment that aim to study people's eye movements and how those forecast behavior and memory. Identifying an anticipatory signature in eye movements can advance those areas as well."
abstract A core question underlying neurobiological and computational models of behavior is how individuals learn environmental statistics and use them to make predictions. Most investigations of this issue have relied on reactive paradigms, in which inferences about predictive processes are derived by modeling responses to stimuli that vary in likelihood. Here we deployed a novel anticipatory oculomotor metric to determine how input statistics impact anticipatory behavior that is decoupled from target-driven-response. We implemented transition constraints between target locations, so that the probability of a target being presented on the same side as the previous trial was 70% in one condition (pret70) and 30% in the other (pret30). Rather than focus on responses to targets, we studied subtle endogenous anticipatory fixation offsets (AFOs) measured while participants fixated the screen center, awaiting a target. These AFOs were small (<0.4° from center on average), but strongly tracked global-level statistics. Speaking to learning dynamics, trial-by-trial fluctuations in AFO were well-described by a learning model, which identified a lower learning rate in pret70 than pret30, corroborating prior suggestions that pret70 is subjectively treated as more regular. Most importantly, direct comparisons with saccade latencies revealed that AFOs: (a) reflected similar temporal integration windows, (b) carried more information about the statistical context than did saccade latencies, and (c) accounted for most of the information that saccade latencies also contained about inputs statistics. Our work demonstrates how strictly predictive processes reflect learning dynamics, and presents a new direction for studying learning and prediction.
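The trial-by-trial learning model the authors fit is, in spirit, a delta-rule tracker of the return probability. A minimal sketch of such a tracker for the pret70 and pret30 conditions (a generic delta-rule estimator with an arbitrary learning rate, not the authors' fitted model):

```python
import random

def track_return_probability(p_return: float, n_trials: int,
                             learning_rate: float, seed: int = 1) -> float:
    """Delta-rule estimate of P(target appears on the same side again)."""
    rng = random.Random(seed)
    estimate = 0.5  # start with an unbiased prior
    for _ in range(n_trials):
        repeat = rng.random() < p_return  # did the target repeat its side?
        # Delta rule: nudge the estimate toward the observed outcome.
        estimate += learning_rate * (float(repeat) - estimate)
    return estimate

est_70 = track_return_probability(0.7, n_trials=2000, learning_rate=0.05)
est_30 = track_return_probability(0.3, n_trials=2000, learning_rate=0.05)
print(f"pret70 tracked estimate: {est_70:.2f}")  # fluctuates around 0.7
print(f"pret30 tracked estimate: {est_30:.2f}")  # fluctuates around 0.3
```

An estimate above 0.5 is what would drive an anticipatory fixation offset toward the previous target's side; the paper's finding that pret70 is learned with a lower rate would correspond to a smaller `learning_rate` fit for that condition.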
regulation of arousal via online neurofeedback improves human performance in a demanding sensory-motor task
josef faller et al. 2019
doi.org/10.1073/pnas.1817207116
Researchers used online neurofeedback to modify an individual's arousal state and improve performance in a demanding sensory-motor task, such as flying a plane or driving in suboptimal conditions. They used a brain-computer interface (BCI) to monitor, through electroencephalography (EEG) in real time, the arousal states of study participants engaged in a virtual reality aerial navigation task. The system generated a neurofeedback signal that helped participants decrease their arousal in particularly difficult flight situations, which in turn improved their performance. The study was published in Proceedings of the National Academy of Sciences.
"The whole question of how you can get into the zone, whether you're a baseball hitter or a stock trader or a fighter pilot, has always been an intriguing one," says Paul Sajda, professor of biomedical engineering (BME), electrical engineering, and radiology, who led the study. "Our work shows that we can use feedback generated from our own brain activity to shift our arousal state in ways that significantly improve our performance in difficult tasks -- so we can hit that home run or land on a carrier deck without crashing."
The 20 subjects in the study were immersed in a virtual reality scenario in which they had to navigate a simulated airplane through rectangular boundaries. Known as a boundary avoidance task, this demanding sensory-motor task created cognitive stresses (the boxes narrowed every 30 seconds) that escalated arousal and quickly resulted in task failure: missing or crashing into a boundary. But when the researchers used neurofeedback, the subjects did better and were able to fly longer while performing the difficult task, which required high levels of visual and motor coordination.
There were three feedback conditions (BCI, sham, and silence) randomly assigned for every new flight attempt. In the BCI condition, subjects heard the sound of a low-rate synthetic heartbeat that was continuously modulated in loudness as a function of the level of inferred task-dependent arousal, as decoded from the EEG. The higher the level of arousal, the louder the feedback, and vice versa. Participants' task performance in the BCI condition, measured as time and distance over which the subject can navigate before failure, was increased by around 20 percent.
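as a rough sketch of the closed loop described above (the linear form and the dB range are my assumptions, not the authors' implementation), the decoded arousal level could drive feedback loudness like this:

```python
# Map a decoded arousal level in [0, 1] onto the loudness of the
# synthetic-heartbeat feedback: higher inferred arousal -> louder
# feedback, and vice versa. Values and linearity are illustrative.

def feedback_gain(arousal, floor_db=-40.0, ceil_db=0.0):
    """Return a playback gain in dB for the heartbeat sound."""
    arousal = min(max(arousal, 0.0), 1.0)   # clamp noisy decoder output
    return floor_db + arousal * (ceil_db - floor_db)
```

a calm subject (arousal near 0) would hear the heartbeat near the -40 dB floor, while a highly aroused subject would hear it at full volume, making rising arousal audible before performance collapses.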
"Simultaneous measurements of pupil dilation and heart rate variability showed that the neurofeedback indeed reduced arousal, causing the subjects to remain calm and fly beyond the point at which they would normally fail," says Josef Faller, the study's lead author and a postdoctoral research scientist in BME. "Our work is the first demonstration of a BCI system that uses online neurofeedback to shift arousal state and improve task performance in accordance with the Yerkes-Dodson law."
The Yerkes-Dodson law is a well-established and intensively studied principle in behavioral psychology describing the relationship between arousal and performance. Developed in 1908, it posits an inverse-U relationship between arousal and task performance: there is an intermediate state of arousal that is optimal for behavioral performance in a given task. In this new study, the researchers showed that they could use neurofeedback in real time to move an individual's arousal from the right side of the Yerkes-Dodson curve to the left, toward a state of improved performance.
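the inverse-U can be sketched with a toy curve (the Gaussian shape and the optimum value are illustrative assumptions, not taken from the paper):

```python
import math

# Toy Yerkes-Dodson-style performance curve: performance peaks at an
# intermediate arousal level and falls off on either side.

def performance(arousal, optimum=0.5, width=0.2):
    """Inverted-U (Gaussian) mapping from arousal in [0, 1] to performance."""
    return math.exp(-((arousal - optimum) ** 2) / (2 * width ** 2))

# Shifting an over-aroused individual leftward along the curve
# (e.g. from 0.9 toward 0.6) raises predicted performance.
calmer = performance(0.6)
over_aroused = performance(0.9)
```

this is the sense in which the neurofeedback "moves" subjects along the curve: lowering arousal from the right-hand limb toward the optimum predicts better performance, while lowering it past the optimum would predict the opposite.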
"What's exciting about our new approach is that it is applicable to different task domains," Sajda adds. "This includes clinical applications that use self-regulation as a targeted treatment, such as mental illness."
The researchers are now studying how neurofeedback can be used to regulate arousal and emotion for clinical conditions such as PTSD. They are also exploring how they might use online monitoring of arousal and cognitive control to inform human-agent teaming, in which a robot and a human work together in a high-stress situation like a rescue. If the robot has information on the human's arousal state, it could choose its tasks in a way that reduces its teammate's arousal, pushing them into an ideal performance zone.
"Good human-agent teams, like the Navy SEALS, do this already, but that is because the human-agents can read facial expressions, voice patterns, etc., of their teammates to infer arousal and stress levels," Sajda says. "We envision our system being a better way to communicate not just this type of information, but much more to a robot-agent."
Our ability to make optimal decisions, judgments, and actions in real-world dynamic environments depends on our state of arousal. We show that we can use electroencephalography-based feedback to shift an individual’s arousal so that their task performance increases significantly. This work demonstrates a closed-loop brain–computer interface for dynamically shifting arousal to affect online task performance in accordance with the Yerkes–Dodson law. The approach is potentially applicable to different task domains and/or for clinical applications that utilize self-regulation as a targeted treatment, such as in mental illness.
abstract
Our state of arousal can significantly affect our ability to make optimal decisions, judgments, and actions in real-world dynamic environments. The Yerkes–Dodson law, which posits an inverse-U relationship between arousal and task performance, suggests that there is a state of arousal that is optimal for behavioral performance in a given task. Here we show that we can use online neurofeedback to shift an individual’s arousal from the right side of the Yerkes–Dodson curve to the left toward a state of improved performance. Specifically, we use a brain–computer interface (BCI) that uses information in the EEG to generate a neurofeedback signal that dynamically adjusts an individual’s arousal state when they are engaged in a boundary-avoidance task (BAT). The BAT is a demanding sensory-motor task paradigm that we implement as an aerial navigation task in virtual reality and which creates cognitive conditions that escalate arousal and quickly results in task failure (e.g., missing or crashing into the boundary). We demonstrate that task performance, measured as time and distance over which the subject can navigate before failure, is significantly increased when veridical neurofeedback is provided. Simultaneous measurements of pupil dilation and heart-rate variability show that the neurofeedback indeed reduces arousal. Our work demonstrates a BCI system that uses online neurofeedback to shift arousal state and increase task performance in accordance with the Yerkes–Dodson law.
a rapid form of offline consolidation in skill learning
marlene bönstrup et al. 2019
doi.org/10.1016/j.cub.2019.02.049
resting, early and often, may be just as critical to learning as practice," said Leonardo G. Cohen, M.D., Ph.D., senior investigator at NIH's National Institute of Neurological Disorders and Stroke and a senior author of the paper published in the journal Current Biology. "Our ultimate hope is that the results of our experiments will help patients recover from the paralyzing effects caused by strokes and other neurological injuries by informing the strategies they use to 'relearn' lost skills."
The study was led by Marlene Bönstrup, M.D., a postdoctoral fellow in Dr. Cohen's lab. Like many scientists, she held the general belief that our brains needed long periods of rest, such as a good night's sleep, to strengthen the memories formed while practicing a newly learned skill. But after looking at brain waves recorded from healthy volunteers in learning and memory experiments at the NIH Clinical Center, she started to question the idea.
The waves were recorded from right-handed volunteers with a highly sensitive scanning technique called magnetoencephalography. The subjects sat in a chair facing a computer screen, under a long cone-shaped brain scanning cap. The experiment began when they were shown a series of numbers on a screen and asked to type the numbers as many times as possible with their left hands for 10 seconds; take a 10-second break; and then repeat this cycle of alternating practice and rest 35 more times. This design is typically used to reduce any complications that could arise from fatigue or other factors.
As expected, the speed at which the volunteers correctly typed the numbers improved dramatically during the first few trials and then leveled off around the 11th cycle. When Dr. Bönstrup looked at the volunteers' brain waves, she observed something interesting.
"I noticed that participants' brain waves seemed to change much more during the rest periods than during the typing sessions," said Dr. Bönstrup. "This gave me the idea to look much more closely for when learning was actually happening. Was it during practice or rest?"
By reanalyzing the data, she and her colleagues made two key findings. First, they found that the volunteers' performance improved primarily during the short rests, and not during typing. The improvements made during the rest periods added up to the overall gains the volunteers made that day. Moreover, these gains were much greater than the ones seen after the volunteers returned the next day to try again, suggesting that the early breaks played as critical a role in learning as the practicing itself.
Second, by looking at the brain waves, Dr. Bönstrup found activity patterns that suggested the volunteers' brains were consolidating, or solidifying, memories during the rest periods. Specifically, they found that the changes in the size of brain waves, called beta rhythms, correlated with the improvements the volunteers made during the rests.
Further analysis suggested that the changes in beta oscillations primarily happened in the right hemispheres of the volunteers' brains and along neural networks connecting the frontal and parietal lobes that are known to help control the planning of movements. These changes only happened during the breaks and were the only brain wave patterns that correlated with performance.
"Our results suggest that it may be important to optimize the timing and configuration of rest intervals when implementing rehabilitative treatments in stroke patients or when learning to play the piano in normal volunteers," said Dr. Cohen. "Whether these results apply to other forms of learning and memory formation remains an open question."
abstract •Temporal microscale of motor-skill learning reveals strong gains during rest periods
•Online motor-skill learning may rely largely on gains during short periods of rest
•Frontoparietal beta oscillatory activity predicts these micro-offline gains
•This rapid form of consolidation substantially contributes to early skill learning
The brain strengthens memories through consolidation, defined as resistance to interference (stabilization) or performance improvements between the end of a practice session and the beginning of the next (offline gains) [1]. Typically, consolidation has been measured hours or days after the completion of training [2], but the same concept may apply to periods of rest that occur interspersed in a series of practice bouts within the same session. Here, we took an unprecedentedly close look at the within-seconds time course of early human procedural learning over alternating short periods of practice and rest that constitute a typical online training session. We found that performance did not markedly change over short periods of practice. On the other hand, performance improvements in between practice periods, when subjects were at rest, were significant and accounted for early procedural learning. These offline improvements were more prominent in early training trials when the learning curve was steep and no performance decrements during preceding practice periods were present. At the neural level, simultaneous magnetoencephalographic recordings showed an anatomically defined signature of this phenomenon. Beta-band brain oscillatory activity in a predominantly contralateral frontoparietal network predicted rest-period performance improvements. Consistent with its role in sensorimotor engagement [3], modulation of beta activity may reflect replay of task processes during rest periods. We report a rapid form of offline consolidation that substantially contributes to early skill learning and may extend the concept of consolidation to a time scale in the order of seconds, rather than the hours or days traditionally accepted.
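the micro-online/micro-offline decomposition implied above can be sketched as simple bookkeeping (my own illustration, not the authors' analysis code): measure typing speed at the start and end of each 10-second bout, attribute within-bout change to online gains and across-rest change to offline gains.

```python
# Decompose total improvement into gains accrued during practice bouts
# (micro-online) and gains accrued over the rests between bouts
# (micro-offline).

def micro_gains(bouts):
    """bouts: list of (speed_at_start, speed_at_end) per practice period."""
    online = sum(end - start for start, end in bouts)
    offline = sum(bouts[i + 1][0] - bouts[i][1] for i in range(len(bouts) - 1))
    return online, offline

# Toy data resembling the reported pattern: little change within a bout,
# jumps across the rests.
bouts = [(2.0, 2.0), (2.5, 2.6), (3.0, 3.0)]
online_gain, offline_gain = micro_gains(bouts)
```

on this toy series almost all of the improvement (0.9 of 1.0 units) lands in the offline term, which is the signature the study reports for early learning.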
dissociating task acquisition from expression during learning reveals latent knowledge
kishore v. kuchibhotla et al. 2019
doi.org/10.1038/s41467-019-10089-0
show a distinction between knowledge and performance, and provide insight into how environment can affect the two.
"Most learning research focuses on how humans and other animals learn 'content' or knowledge. Here, we suggest that there are two parallel learning processes: one for content and one for context, or environment. If we can separate how these two pathways work, perhaps we can find ways to improve performance," says Kishore Kuchibhotla, an assistant professor in The Johns Hopkins University's department of psychological and brain sciences and the study's lead author.
While researchers have known that the presence of reinforcement, or reward, can change how animals behave, it's been unclear exactly how rewards affect learning versus performance.
An example of the difference between learning and performance, Kuchibhotla explains, is the difference between a student studying and knowing the answers at home, and a student demonstrating that knowledge on a test at school.
"What we know at any given time can be different than what we show; the ability to access that knowledge in the right environment is what we're interested in," he says.
To investigate what animals know in hopes of better understanding learning, Kuchibhotla and the research team trained mice, rats and ferrets on a series of tasks, and measured how accurately they performed the tasks with and without rewards.
For the first experiment, the team trained mice to lick for water through a lick tube after hearing one tone, and to not lick after hearing a different, unrewarded tone. It takes mice two weeks to learn this in the presence of the water reward. At a time point early in learning, around days 3-5, the mice performed the task at chance levels (about 50%) when the lick tube/reward was present. When the team removed the lick tube entirely on these early days, however, the mice performed the task at more than 90% accuracy. The mice, therefore, seemed to understand the task many days before they expressed knowledge in the presence of a reward.
To confirm this finding with other tasks and animals, the team also had mice press a lever for water when they heard a certain tone; prompted rats to look for food in a cup if they heard a tone, but not if a light appeared before the tone; had rats press a lever for sugar water when a light was presented before a tone; had rats push a lever for sugar water when they heard a certain tone; and prompted ferrets to differentiate between two different sounds for water. In all experiments, the animals performed better when rewards weren't available.
"Rewards, it seems, help improve learning incrementally, but can mask the knowledge animals have actually attained, particularly early in learning," says Kuchibhotla. Furthermore, the finding that all animals' performance improved across the board without rewards suggests that variability in learning rates may be due to differences in the animals' sensitivity to reward context rather than differences in intelligence.
The dissociation between learning and performance, the researchers suggest, may someday help us isolate the root causes of poor performance. While the study involved only rodents and ferrets, Kuchibhotla says it may be possible to someday help animals and humans alike better access content when they need it if the right mechanisms within the brain can be identified and manipulated.
For humans, this could help those with Alzheimer's Disease maintain lucidity for longer periods of time and improve testing environments for schoolchildren.
abstract Performance on cognitive tasks during learning is used to measure knowledge, yet it remains controversial since such testing is susceptible to contextual factors. To what extent does performance during learning depend on the testing context, rather than underlying knowledge? We trained mice, rats and ferrets on a range of tasks to examine how testing context impacts the acquisition of knowledge versus its expression. We interleaved reinforced trials with probe trials in which we omitted reinforcement. Across tasks, each animal species performed remarkably better in probe trials during learning, and inter-animal variability was strikingly reduced. Reinforcement feedback is thus critical for learning-related behavioral improvements but, paradoxically, masks the expression of underlying knowledge. We capture these results with a network model in which learning occurs during reinforced trials while context modulates only the read-out parameters. Probing learning by omitting reinforcement thus uncovers latent knowledge and identifies context, not “smartness”, as the major source of individual variability.
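the network model summarized in the abstract (learning confined to reinforced trials, context scaling only the read-out) can be caricatured in a few lines; the functional forms and parameter values here are my assumptions, not the paper's fitted model:

```python
# Latent association strength w grows on every reinforced trial, but the
# testing context multiplies only the read-out: the reward context damps
# expression, while reward-free probe trials reveal the latent knowledge.

def simulate(n_trials, alpha=0.1, reward_gain=0.3, probe_gain=1.0):
    w = 0.0
    expressed_reward, expressed_probe = [], []
    for _ in range(n_trials):
        w += alpha * (1.0 - w)                       # learning toward the association
        expressed_reward.append(reward_gain * w)     # performance shown with reward present
        expressed_probe.append(probe_gain * w)       # performance shown on probe trials
    return expressed_reward, expressed_probe

rewarded, probe = simulate(20)
```

early in training the probe read-out already tracks the latent weight while the rewarded read-out lags, reproducing the pattern of mice at chance with the lick tube present but above 90% with it removed.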
object‐label‐order effect when learning from an inconsistent source
timmy ma, natalia l. komarova et al. 2019
doi.org/10.1111/cogs.12737
learn better when seeing an object before hearing its description. The study builds on past research by focusing on learning in "inconsistent" environments featuring different teaching styles or distracting noises.
"Understanding how the learning process occurs, and what factors affect it, may help instructors improve methodologies of education," said Timmy Ma, a research associate at Dartmouth.
Learning environments can often complicate the learning process. For example, a student taking a course with both a teacher and a teaching assistant needs to adapt to the ways the different instructors teach the same subject. Even the varying ways teachers talk and behave can complicate learning.
For the study, researchers intentionally provided confusing information, mimicking these types of inconsistencies, to subjects who were tasked with learning the names of three fictional characters -- "yosh," "wug" and "niz" -- using two types of learning methods.
The first method, "object-label learning," is when a student sees an object first and then is provided with the label. This means seeing a color before being told its name, or hearing a description of a physical force before hearing its formal title.
The second learning procedure is "label-object learning," the reverse order in which a student sees a label first.
Subjects in the study were asked to match the pictures of the characters with their made-up names. The presentation of information was intentionally misleading to see if learners have an easier time dealing with the inconsistency depending on the way the input was presented -- either object first or label first.
The results of the study indicate that students who see objects first and then hear the name -- object-label learners -- process inconsistent information better than learners who hear the name first and then see the object.
Researchers detected that learners who interact with the object before hearing the name perform "frequency boosting" -- the ability to process noisy, inconsistent information to identify and use the most frequent rule.
For example, when teachers interchangeably use "soda" or "pop" to describe the name of a carbonated beverage, the children who use frequency boosting will learn to use the term that is used most frequently.
A key feature of frequency boosting is that learners will also use the rule more consistently than the instructor.
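frequency boosting versus probability matching can be sketched numerically; the sharpening rule below is one illustrative choice of mine, not the paper's reinforcement-learning model:

```python
import random

# Given an inconsistent source that uses the majority label 70% of the
# time, a probability matcher reproduces the 70/30 mix, while a
# frequency booster uses the majority label even more consistently
# than the source did (regularization).

def probability_matcher(p_majority, n, rng):
    """Fraction of productions using the majority label when matching."""
    return sum(rng.random() < p_majority for _ in range(n)) / n

def frequency_booster(p_majority, boost=3.0):
    """Sharpen the source distribution toward the more frequent label."""
    p = p_majority ** boost
    q = (1 - p_majority) ** boost
    return p / (p + q)   # probability of producing the majority label

rng = random.Random(0)                       # seeded for reproducibility
matched = probability_matcher(0.7, 10_000, rng)
boosted = frequency_booster(0.7)
```

with a 70/30 source the matcher stays near 0.7 while the booster exceeds 0.9, which is the "more consistent than the instructor" behavior attributed to object-label learners; the label-object learners' undermatching would correspond to a boost below 1.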
"When trying to teach a child about colors, such as blue or red, not many people think about the best way to do it. People just say this is blue and point to an object. From this research, we can say that the order of presentation actually matters and that seeing the object first creates a stronger association to the name," said Ma who conducted the research while a PhD candidate at the University of California, Irvine.
The research team also used mathematical modeling to confirm the observations as well as provide a theoretical explanation as to why one type of learner is different from the other.
"This research combines experiments with a novel mathematical model to show that object-label learners deal better with inconsistencies. It's exciting to see that the math theory explains the observational data," said Ma.
According to the research team, understanding how people learn could have broad applications. For example, foreign language learning programs could benefit from showing images before introducing the name of an object. The results can also be applied to math, science or any other subjects where students need to make similar associations.
abstract Learning in natural environments is often characterized by a degree of inconsistency from an input. These inconsistencies occur, for example, when learning from more than one source, or when the presence of environmental noise distorts incoming information; as a result, the task faced by the learner becomes ambiguous. In this study, we investigate how learners handle such situations. We focus on the setting where a learner receives and processes a sequence of utterances to master associations between objects and their labels, where the source is inconsistent by design: It uses both “correct” and “incorrect” object‐label pairings. We hypothesize that depending on the order of presentation, the result of the learning may be different. To this end, we consider two types of symbolic learning procedures: the Object‐Label (OL) and the Label‐Object (LO) process. In the OL process, the learner is first exposed to the object, and then the label. In the LO process, this order is reversed. We perform experiments with human subjects, and also construct a computational model that is based on a nonlinear stochastic reinforcement learning algorithm. It is observed experimentally that OL learners are generally better at processing inconsistent input compared to LO learners. We show that the patterns observed in the learning experiments can be reproduced in the simulations if the model includes (a) an ability to regularize the input (and also to do the opposite, i.e., undermatch) and (b) an ability to take account of implicit negative evidence (i.e., interactions among different objects/labels). The model suggests that while both types of learners utilize implicit negative evidence in a similar way, there is a difference in regularization patterns: OL learners regularize the input, whereas LO learners undermatch. As a result, OL learners are able to form a more consistent system of image‐utterance associations, despite the ambiguous learning task.
ketamine improves short-term plasticity in depression by enhancing sensitivity to prediction errors
rachael l.sumner et al. 2020
doi.org/10.1016/j.euroneuro.2020.07.009
ketamine could reverse insensitivity to prediction error in depression.
In other words, the drug may help to alleviate depression by making it easier for patients to update their model of reality.
“Ketamine is exciting because of its potential to both treat, and better understand depression. This is largely because ketamine doesn’t work the way ordinary antidepressants do – its primary mechanism isn’t to increase monoamines in the brain like serotonin, and so ketamine gives us new insight into other potential mechanisms underlying depression,” said lead researcher Rachael Sumner, a postdoctoral research fellow at The University of Auckland School of Pharmacy.
“One of the major candidates for the mechanisms underlying ketamine’s antidepressant properties is how it increases neural plasticity. Neural plasticity is the brain’s ability to form new connections between neurons and ultimately underlies learning and memory in the brain.”
“Rodent studies have consistently shown that ketamine increases neural plasticity within 24 hours,” Sumner said. “However, there are major challenges when attempting to translate what we know occurs in rodents to determine if it occurs in humans. Sensory processing mechanisms of plasticity, like the auditory process we examined in this study, provide an important means to meet this challenge of translation.”
The double-blind, placebo-controlled study included 30 participants with major depressive disorder who had not responded to at least 2 recognized treatments for depression. Seven in 10 participants demonstrated a 50% or greater decrease in their depression symptoms one day after receiving ketamine.
“In this case we used what’s called an ‘auditory mismatch negativity’ task to assess short-term plasticity and predictive coding, or the brain’s adaptability and tendency to try to predict what’s coming next,” Sumner said.
The researchers used electroencephalogram (EEG) technology to measure brain activity as the participants listened to a sequence of auditory tones that occasionally included an unexpected noise. The brain automatically generates a particular pattern of electrical brain activity called mismatch negativity (MMN) upon hearing an unexpected noise.
Sumner and her colleagues found that ketamine increased the amplitude of the MMN several hours post-infusion, suggesting that the drug increased sensitivity to prediction error.
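MMN amplitude is conventionally quantified from a difference wave; a generic sketch of that computation follows (my illustration of the standard method, not the paper's exact pipeline, which also used dynamic causal modelling):

```python
# Average the EEG epochs for standard and deviant tones, subtract to get
# the difference wave, and take its most negative point (the MMN is a
# negative deflection in the deviant-minus-standard wave).

def mmn_amplitude(standard_epochs, deviant_epochs):
    """Each argument: list of equal-length ERP epochs (lists of voltages)."""
    n = len(standard_epochs[0])

    def grand_average(epochs):
        return [sum(e[i] for e in epochs) / len(epochs) for i in range(n)]

    diff = [d - s for d, s in zip(grand_average(deviant_epochs),
                                  grand_average(standard_epochs))]
    return min(diff)   # most negative point of the difference wave

# Toy epochs (3 samples each): deviants dip where standards peak.
standards = [[0.0, 1.0, 0.5], [0.0, 1.2, 0.5]]
deviants = [[0.0, 0.2, 0.5], [0.0, 0.0, 0.5]]
```

a larger (more negative) value of this amplitude is what the study reports post-ketamine, read as greater sensitivity to prediction error.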
“We found that just 3 hours after receiving ketamine the brains of people with moderate to severe depression became more sensitive to detecting errors in their predictions of incoming sensory information,” she told PsyPost.
“To provide context, the brain creates models or predictions about the world around it and what is most likely to come next. This is largely thought to be because it is an efficient way to deal with the massive amount of information hitting our senses every moment of the day. When something is constant and stable in the world these models can become very rigid. It has been suggested that these models can become too rigid and unchanging, underlying negative ruminations and self-belief that people with depression often report.”
“As an example of how this might look in depression — it is often easy for friends and family to point out to their loved one errors or the harm in their thought patterns,” Sumner explained. “A counsellor will often work with a person to change their harmful ruminations or beliefs, such as with cognitive behavioral therapy (CBT). However, the person experiencing depression may find this difficult to see, or to take on because of how rigid their models (belief about themselves, the world around them, their future) have become.”
The participants also completed a visual task to measure long-term potentiation (LTP), the ability of neurons to increase communication efficiency with other neurons. An analysis of that data, published in Biological Psychiatry: Cognitive Neuroscience and Neuroimaging, found evidence that the antidepressant effects of ketamine were associated with enhanced LTP.
“Ketamine may be working by increasing plasticity (the ability to adapt and learn new things), as well as increasing the brain’s sensitivity to unexpected external input that is signaling errors in its own rigid expectations,” Sumner said.
The main limitations of the new research are the lack of a control group and relatively small sample size. But Sumner and her colleagues hope that their future research will shed more light on whether ketamine can help to defeat harmful cognitions.
“The task we used involves presenting beeps through some headphones, and while it provides a highly controlled way to measure plasticity and sensitivity to unexpected input, it is pretty far removed from the complexity of the experience of depression itself. The next study should replicate our finding, and aim to target and relate the change in the mismatch response and connectivity to higher-level brain functions,” Sumner told PsyPost.
“Building on this finding may help provide evidence for the use of ketamine to facilitate or enhance people’s ability to engage with and benefit from therapies like CBT, by putting the brain into a more plastic state, ready to update its models.”
abstract •Prediction error sensitivity is improved by ketamine in patients with depression.
•Forward projecting connectivity is correlated with the antidepressant response.
•Right inferior temporal cortex activation and connectivity is central to these results.
•Aberrant repetition suppression may not be improved by ketamine in the short-term.
Major depressive disorder negatively impacts the sensitivity and adaptability of the brain's predictive coding framework. The current electroencephalography study into the antidepressant properties of ketamine investigated the downstream effects of ketamine on predictive coding and short-term plasticity in thirty patients with depression using the auditory roving mismatch negativity (rMMN). The rMMN paradigm was run 3–4 h after a single 0.44 mg/kg intravenous dose of ketamine or active placebo (remifentanil infused to a target plasma concentration of 1.7 ng/mL) in order to measure the neural effects of ketamine in the period when an improvement in depressive symptoms emerges. Depression symptomatology was measured using the Montgomery-Asberg Depression Rating Scale (MADRS); 70% of patients demonstrated at least a 50% reduction in their MADRS global score. Ketamine significantly increased the MMN and P3a event related potentials, directly contrasting literature demonstrating ketamine's acute attenuation of the MMN. This effect was only reliable when all repetitions of the post-deviant tone were used. Dynamic causal modelling showed greater modulation of forward connectivity in response to a deviant tone between right primary auditory cortex and right inferior temporal cortex, which significantly correlated with antidepressant response to ketamine at 24 h. This is consistent with the hypothesis that ketamine increases sensitivity to unexpected sensory input and restores deficits in sensitivity to prediction error that are hypothesised to underlie depression. However, the lack of repetition suppression evident in the MMN evoked data compared to studies of healthy adults suggests that, at least within the short term, ketamine does not improve deficits in adaptive internal model calibration.
small molecule cognitive enhancer reverses age-related memory decline in mice
karen krukowski et al. 2020
doi.org/10.7554/elife.62048
all systems red
martha wells 2017
artificial condition
martha wells 2018
rogue protocol
martha wells 2018
exit strategy
martha wells 2018
network effect
martha wells 2020
vita nostra
dyachenko marina, dyachenko sergey 2012
earth in mind: on education, environment, and the human prospect
david orr 2013
mastering science with metacognitive and self-regulatory strategies: a teacher-researcher dialogue of practical applications for adolescent students
suzanne e. hiller 2017
you can do anything: the surprising power of a “useless” liberal arts education
george anders 2017
from failure to success: everyday habits and exercises to build mental resilience and turn failures into successes
martin meadows 2017
kickstarting your academic career: skills to succeed in the social sciences
robert ostergard jr., stacy fisher 2017
the genius within: smart pills, brain hacks and adventures in intelligence
david adam 2018
ready, study, go! smart ways to learn
khurshed batliwala 2018
networks of mind: learning, culture, neuroscience
kathy hall, alicia curtin 2013
my plastic brain: one woman’s yearlong journey to discover if science can improve her mind
caroline williams 2018
the indoor epidemic: how parents, teachers, and kids can start an outdoor revolution
erik shonstrom 2017
experiential learning: experience as the source of learning and development
david kolb 2014
learn better: mastering the skills for success in life, business, and school, or, how to become an expert in just about anything
ulrich boser 2017
the secret life of the mind: how our brain thinks, feels and decides
mariano sigman 2017
the genius checklist: nine paradoxical tips on how you can become a creative genius
dean keith simonton 2018
understanding how we learn: a visual guide
yana weinstein, megan sumeracki, oliver caviglioli 2018
peer instruction: a user’s manual
eric mazur 1996
never stop learning: stay relevant, reinvent yourself, and thrive
bradley staats 2018
cognitive gadgets: the cultural evolution of thinking
cecilia heyes 2018
cognitive gadgets: our thinking devices – imitation, mind-reading, language and others – are neither hard-wired nor designed by genetic evolution
cecilia heyes 2019
https://aeon.co/essays/how-culture-works-with-evolution-to-produce-human-cognition
as expected, very human-centric, especially in the way other species are denigrated as unable to learn in the way we profess to
just–so stories indeed, by definition: “if a cognitive ability is found not only in humans but also in other animals, its development is very unlikely to depend on culture. More generally, when species that are closely genetically related to humans, such as chimpanzees, have a more human like cognitive capacity than species that are distantly related to humans, such as rats, all other things being equal, it suggests that development of the focal capacity is heavily dependent on genetic information.”
my point isn’t that she’s biased or wrong, it is that we are all biased and wrong most of the time, so what are we going to do about it?
the intelligence trap: why smart people do stupid things and how to make wiser decisions
david robson 2019
how the other half learns: equality, excellence, and the battle over school choice
robert pondiscio 2019
limitless mind: learn, lead, and live without barriers
jo boaler 2019
the nature of explanation
k. j. w. craik 1943
ego depletion is nothing of the sort. besides the evidence that non-glucose stimulation at the point of “depletion” can revive the subject, it may not even be a question of glucose depletion versus mental depletion.
it may instead be the difference between our habits and changing them. when we spend time doing strange new things or actively changing habits, regression towards the mean, or a tendency towards equilibrium, would suggest that the next action will be a habitual one rather than another change of habit. so the key to increasing our level of performance may be to accumulate better general habits, and to review and maintain them.
it may be that the default behaviour selected by evolution is to follow habit changes with old habits or rest, to make sure we do not go too far astray in a short period
counting stars
one republic
☆lately, i’ve been, i’ve been losing sleep
dreaming about the things that we should be
but baby, i’ve been, i’ve been praying hard
say, no more counting dollars
we’ll be counting stars (yeah we’ll be counting stars)
i see this life like a swinging vine
swing my heart across the line
and my face is flashing signs
seek it out and you shall find
★old, but i’m not that old
young, but i’m not that bold
i don’t think the world is sold
i’m just doing what we’re told
i feel something so right
doing the wrong thing
i feel something so wrong
doing the right thing
i could lie, couldn’t i, could lie
everything that kills me makes me feel alive
☆repeat x2
i feel the love and i feel it burn
down this river, every turn
hope is a four-letter word
make that money, watch it burn
★repeat
i could lie, could lie, could lie
everything that drowns me makes me wanna fly
☆repeat x2
take that money
watch it burn
sink in the river
the lessons are learnt
using confirming and dissolving (popperian falsifiability) to describe the two tendencies of science is better and simpler to understand than confirmation and “falsifiability”.
philosophical papers, volume 1: the methodology of scientific research programmes
imre lakatos et al. 1978 not yet read
philosophical papers, volume 2: mathematics, science and epistemology
imre lakatos et al. 1978 not yet read
against method: outline of an anarchistic theory of knowledge
paul feyerabend 1993 not yet read