<?xml version="1.0" encoding="utf-8" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
<title>sth-about-psych</title>
<link>https://ameblo.jp/sth-about-psych/</link>
<atom:link href="https://rssblog.ameba.jp/sth-about-psych/rss20.xml" rel="self" type="application/rss+xml" />
<atom:link rel="hub" href="http://pubsubhubbub.appspot.com" />
<description>I'm gonna write about things related to psychology</description>
<language>ja</language>
<item>
<title>Voice (声)</title>
<description>
<![CDATA[ <span style="font-weight: bold;">   No matter how much I scream,</span><br style="font-weight: bold;"><span style="font-weight: bold;">no matter how loudly I raise my voice and scream,</span><br style="font-weight: bold;"><span style="font-weight: bold;">　I think my voice may never reach</span><br style="font-weight: bold;"><span style="font-weight: bold;">anyone's heart</span><br style="font-weight: bold;"><span style="font-weight: bold;">at all.</span><br style="font-weight: bold;"><br style="font-weight: bold;"><span style="font-weight: bold;">　Even if I scream,</span><br style="font-weight: bold;"><span style="font-weight: bold;">until my voice goes hoarse,</span><br style="font-weight: bold;"><span style="font-weight: bold;">　that I am here,</span><br style="font-weight: bold;"><br style="font-weight: bold;"><span style="font-weight: bold;">　no one,</span><br style="font-weight: bold;"><span style="font-weight: bold;">no one notices.</span><br style="font-weight: bold;"><br style="font-weight: bold;"><br style="font-weight: bold;"><br style="font-weight: bold;"><br style="font-weight: bold;">
]]>
</description>
<link>https://ameblo.jp/sth-about-psych/entry-10141256334.html</link>
<pubDate>Fri, 19 Sep 2008 20:15:46 +0900</pubDate>
</item>
<item>
<title>Long Term Memory</title>
<description>
<![CDATA[ <img alt="ミニー" src="https://emoji.ameba.jp/img/user/lo/lo-dona-ve/392.gif"><font size="4" style="font-weight: bold; color: rgb(255, 20, 147);">Forgetting</font><img alt="ミッキー" src="https://emoji.ameba.jp/img/user/lo/lo-dona-ve/383.gif"> <br><br><img alt="おんぷ" src="https://emoji.ameba.jp/img/user/sa/saki-57/766.gif">Paired-associate learning<br><br><img alt="おんぷ" src="https://emoji.ameba.jp/img/user/sa/saki-57/766.gif">Researchers have used this task to study interference in 2 ways.<br>    -<span style="color: rgb(255, 20, 147);">proactive interference</span>: refers to the fact that previous learning can make retention of subsequent learning more difficult.<br>    -<span style="color: rgb(255, 20, 147);">retroactive interference</span>: refers to the fact that subsequent learning can make retention of previous learning more difficult.<br><img alt="おんぷ" src="https://emoji.ameba.jp/img/user/sa/saki-57/766.gif">How exactly does interference work?<br>    -Anderson &amp; Neely<br>    -A <span style="color: rgb(255, 20, 147);">retrieval cue</span> points to, and leads to the recovery of, a target memory.  However, when that retrieval cue becomes associated with other targets, the 2nd target competes with the 1st one during retrieval. <br>    -Anderson's <span style="color: rgb(255, 20, 147);">fan effect</span>: the more facts studied about a concept, the more time required to retrieve info.<br><br><span style="font-weight: bold; color: rgb(128, 0, 128);">Retrieval of info<br><img alt="おんぷ" src="https://emoji.ameba.jp/img/user/sa/saki-57/766.gif"></span><span style="color: rgb(0, 0, 0);">There are some principles of retrieval that can be used to aid recall. <br></span><br>1. <span style="color: rgb(255, 20, 147);">categorisation</span>: material organised into categories or other units is more easily recalled than info with no apparent organisation (this effect occurs even when the organised material is initially presented in random order)<br><br><br>2. 
<span style="color: rgb(255, 20, 147);">encoding specificity</span>: <br>   -When material is first put into LTM, encoding depends on the <span style="text-decoration: underline;">context</span> in which the material is learned.<br><br>3. context effect<br><br>4. state-dependent learning<br><br>5. mood-dependent memory effect<br><br>6. spacing effect<br>-A variety of theories seek to explain the spacing effect.<br>-encoding variability: <br><br><img alt="おんぷ" src="https://emoji.ameba.jp/img/user/sa/saki-57/766.gif">Cue overload: a retrieval cue is most effective when it is highly distinctive and not related to any other target memory.<br><br><span style="color: rgb(128, 0, 128); font-weight: bold;">The levels of processing view<br><br></span><img alt="おんぷ" src="https://emoji.ameba.jp/img/user/sa/saki-57/766.gif">Levels-of-processing theory of memory<br>-In this model, memory is thought to depend not on how long material is stored or on the kind of storage in which the material is held, but on the initial encoding of the info to be remembered.<br>-The fundamental assumption is that retention and coding of info depend on the kind of perceptual analysis done on the material at encoding.<br><br><img alt="おんぷ" src="https://emoji.ameba.jp/img/user/sa/saki-57/766.gif">incidental learning<br>-Any learning that is not in accord with the participant's purpose is called incidental learning.<br><img alt="おんぷ" src="https://emoji.ameba.jp/img/user/sa/saki-57/766.gif">deep processing<br><img alt="おんぷ" src="https://emoji.ameba.jp/img/user/sa/saki-57/766.gif">shallow processing<br><br>Memory Reconstruction<br><br><img alt="おんぷ" src="https://emoji.ameba.jp/img/user/sa/saki-57/766.gif"><br><br>
]]>
</description>
<link>https://ameblo.jp/sth-about-psych/entry-10134957512.html</link>
<pubDate>Wed, 03 Sep 2008 15:36:00 +0900</pubDate>
</item>
<item>
<title>Working memory</title>
<description>
<![CDATA[ Working memory<br><br>Central executive (CE)<br><br><span style="font-weight: bold;">Phonological loop (PL)</span><br><img alt="蝶々" src="https://emoji.ameba.jp/img/user/ro/rosa-n24/10632.gif">The phonological loop is dedicated to <span style="color: rgb(255, 20, 147);">auditory material</span><br><img alt="蝶々" src="https://emoji.ameba.jp/img/user/ro/rosa-n24/10632.gif">used to carry out subvocal rehearsal to maintain verbal material<br><img alt="蝶々" src="https://emoji.ameba.jp/img/user/ro/rosa-n24/10632.gif">2 components: a short-term phonological buffer &amp; a subvocal rehearsal loop<br><img alt="蝶々" src="https://emoji.ameba.jp/img/user/ro/rosa-n24/10632.gif">the idea here is that when the participant initially encounters info, particularly verbal info, she translates it into some sort of <span style="color: rgb(255, 20, 147);">auditory code</span> and processes it through the phonological loop.<br><img alt="蝶々" src="https://emoji.ameba.jp/img/user/ro/rosa-n24/10632.gif">limited capacity: able to hold 1.5-2 sec of speech-based info<br><img alt="蝶々" src="https://emoji.ameba.jp/img/user/ro/rosa-n24/10632.gif">Daneman &amp; Carpenter's study<br><br><img alt="蝶々" src="https://emoji.ameba.jp/img/user/ro/rosa-n24/10646.gif"><span style="color: rgb(255, 0, 0);">Articulatory suppression<br>    <span style="color: rgb(0, 0, 0);">-this affects recall performance by <span style="text-decoration: underline;">preventing the use of the rehearsal loop</span> to maintain items in the phonological store, or by inhibiting the encoding of visual info into phonological form</span><br style="color: rgb(0, 0, 0);"><span style="color: rgb(0, 0, 0);">    -memory span has been found to be affected by AS</span></span><br style="color: rgb(0, 0, 0);"><img alt="蝶々" src="https://emoji.ameba.jp/img/user/ro/rosa-n24/10646.gif">Phonemic S<br style="color: rgb(0, 0, 0);"><br style="font-weight: bold;"><span style="font-weight: bold;">Visuospatial sketchpad (VSSP)</span><br><img alt="蝶々" src="https://emoji.ameba.jp/img/user/ro/rosa-n24/10632.gif">The visuospatial sketchpad is dedicated to <span style="color: rgb(255, 20, 147);">visual material.<br><br><br></span><img alt="蝶々" src="https://emoji.ameba.jp/img/user/ro/rosa-n24/10632.gif"><br><br><br> <br>
]]>
</description>
<link>https://ameblo.jp/sth-about-psych/entry-10134639181.html</link>
<pubDate>Tue, 02 Sep 2008 17:46:15 +0900</pubDate>
</item>
</channel>
</rss>
