Piaget on learning

Jean Piaget

I’ve written before about Piaget, but this time I’m doing it to shed some light on the idea of a pre-epiphany and a post-epiphany state of mind when it comes to learning a new concept. But, first of all, let us define our terms appropriately so we have a better understanding of what is coming:

Propositional attitudes vs. propositional content. Enter philosophy of mind

Whatever an idea or a piece of knowledge is, it can be reduced to information at the cognitive level of analysis. When we are dealing with propositional attitudes and their propositional contents, the contents are information and the attitudes we hold towards them are the way we generally process them. So if you believe that snow is white, ‘snow is white’ is the propositional content (the information) of your belief, and the believing itself is the propositional attitude you have towards that content (the way you generally process that information).

Suppose you believe that evolution is wrong; that is to say, you reject, or simply don’t believe in, evolution. There is the content ‘evolution’ and there is the attitude of ‘rejection’ or ‘disbelief’. What makes some people believe that evolution is either right or wrong? Let’s have a look from a Piagetian and information-processing perspective:

Back to cognitive psychology

Piaget believed that children are like little scientists who play with the world around them to build a coherent and systematic whole that explains it. This whole does not have to be evidence-based: a child may think that thunder is actually caused by the noise God makes when ‘he is rearranging his furniture’. Even if children don’t develop evidence-based ideas about how the world works, they do strive to develop coherent and logically possible ideas based on what they know.

What is knowledge?

To know something, from a cognitive-psychological perspective, is just to have certain information stored in your long-term memory. You know what you did yesterday (if you remember, of course), you know how to add and subtract numbers (hopefully), and you know how to read and understand English. These are all things in your long-term memory. So, to know something is to have information stored in your long-term memory: information you can retrieve to work on the new information that is in your short-term memory.

Memory, memory, memory

What a nice workbench

With Murdock and Atkinson & Shiffrin in mind (we don’t need to delve into more intricate models of memory such as Baddeley and Hitch’s), memory is like a two-store system: information that is immediately available to you sits in short-term memory (STM), and a large amount of subconsciously stored memory sits in long-term memory (LTM).

Short-term memory is like a passive store that holds whatever information is presently available to you. What you are seeing right now, for instance, is in your STM, contrary to what you did on your last birthday, which is in your LTM (and is now also in your STM, because you are thinking about it!).

Information in STM doesn’t last long. Information here is either lost or transferred to LTM. How is information in STM transferred to LTM?

It depends on a series of processes that Atkinson and Shiffrin called control processes, though the term is perhaps much older than their research on memory. Control processes are conscious processes we use to manipulate the information in our STM and LTM within what we call working memory (WM). Working memory is like a workbench where you use the materials you have in STM and LTM to create a new product, which is either going to be lost or transferred into LTM as well.
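To make the workbench picture concrete, here is a minimal toy sketch in Python of the two-store idea, with a couple of control processes modelled as functions. Every name here (STM_CAPACITY, perceive, elaborate, and so on) is invented for illustration; this is a caricature of the model, not a psychological simulation.

```python
# Toy two-store model in the spirit of Atkinson & Shiffrin.
# All names are illustrative; this is a caricature, not a simulation.
STM_CAPACITY = 7  # the often-cited "seven, plus or minus two" items

stm: list[str] = []    # short-term store: small and volatile
ltm: set[str] = set()  # long-term store: large and durable

def perceive(item: str) -> None:
    """New information enters STM; if full, the oldest item is displaced."""
    stm.append(item)
    if len(stm) > STM_CAPACITY:
        stm.pop(0)  # unattended information in STM is simply lost

def rehearse(item: str) -> None:
    """Rehearsal keeps an item active in STM by refreshing it."""
    if item in stm:
        stm.remove(item)
        stm.append(item)

def elaborate(item: str) -> None:
    """A control process on the 'workbench': connecting an STM item
    with existing knowledge transfers it into LTM."""
    if item in stm:
        ltm.add(item)

def retrieve(item: str) -> bool:
    """Retrieval brings LTM content back into STM."""
    if item in ltm:
        perceive(item)
        return True
    return False

perceive("snow is white")
elaborate("snow is white")
stm.clear()                # unrehearsed STM contents fade away...
retrieve("snow is white")  # ...but elaborated items come back from LTM
```

Notice that in this sketch rehearsal only keeps an item alive, while elaboration is what writes it into LTM, which matches the claim that STM contents are either processed or lost.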

Some of these control processes are elaboration, organization, imagery, context, rehearsal, and retrieval.

Rehearsal refers to repeating information in STM over and over again so it is not lost. Think of any moment when you had to keep a number in your head, so you kept repeating it to yourself over and over again.

Retrieval means taking information from your LTM and bringing it into your STM. If I ask you what you think about x, you will first retrieve from LTM what you think about x and then think about how you would answer me. What do you think about Pixar’s most recent movie?

Elaboration entails adding meaning to new information by connecting it with already existing knowledge. So, when I tell you that memory functions as a store and that working memory is like a workbench, I am connecting new information with knowledge already in your LTM so you can properly grasp the meaning. Basically, the way our previous knowledge is structured influences the way new information is stored and subsequently recalled.

Bartlett, an early cognitive psychologist, introduced the notion of a schema. Schemas are our internal representations of how the world works, and whatever information we receive is filtered through them.

Enter Piaget?

Swiss psychologist Jean Piaget’s early approach to cognition was that of a qualitative theory of development where children leap from one stage to another. In each cognitive stage, children have different cognitive abilities. Nonetheless, what matters to me here is his later quantitative approach to cognition: relying on schemas to construct his theory of how we develop our cognitive abilities.

Schemas, as our mental representations about how the world works, influence the way we learn new things, but we aren’t bound or ruled by these schemas. Our schemas can change in the future as we learn new things.

According to Piaget, cognitive development depends on four factors: maturation, social interaction, activity (interaction with nature), and equilibration. The first three are fairly self-explanatory. Now, what is equilibration?

When we are presented with new information and evidence that wasn’t part of our schemas, we start assimilating it. When we assimilate new schemas that are incompatible with our previous ones, we enter a process of equilibration in which we organize and accommodate the new knowledge with the old in order to create a coherent whole. Of course, whether we reach equilibration depends on how much information we are given and whether this information is properly transmitted.
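As a rough illustration, assimilation and accommodation can be caricatured as updating a set of beliefs. This is a toy sketch only, and every name and the contradicts table are invented for the example.

```python
# Toy sketch of Piagetian equilibration: a "schema" is just a set of
# propositions, and a hand-made table says which propositions conflict.
# All names and data here are invented for the example.

def equilibrate(schema: set[str], new_info: str,
                contradicts: dict[str, str]) -> set[str]:
    """Assimilate new_info; if it conflicts with an existing belief,
    accommodate by dropping the old belief so the whole stays coherent."""
    conflict = contradicts.get(new_info)
    if conflict in schema:
        schema = schema - {conflict}  # disequilibrium: old belief revised
    return schema | {new_info}       # coherent whole restored

# which new proposition is incompatible with which old belief
contradicts = {
    "thunder is an electrical discharge": "thunder is God moving furniture",
}

child = {"thunder is God moving furniture"}
child = equilibrate(child, "thunder is an electrical discharge", contradicts)
```

The interesting case is the conflicting one: mere assimilation would just add the new proposition, while accommodation revises the old schema so the resulting whole is coherent again.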

To say that we don’t understand an idea, or that we are in a pre-epiphany state, is then to say that we are going through a process of equilibration in which we need to accommodate and organize enough knowledge to finally comprehend the idea. To achieve this, new information needs to be elaborated and organized. Once we grasp all the necessary evidence to comprehend an idea and the facts have been given to us in a clear fashion, we finally reach that ‘click’ which wasn’t possible before, because there were still conflicts (or disequilibrium) between our past and present schemas. A ‘click’ is, then, the moment we finally reach equilibration, when information is at long last properly organized and accommodated in our minds.

Of course, to say that you understand an idea is not to state that you believe that idea. There is a difference between propositional content and propositional attitudes. Whether you believe in evolution after it has been properly explained to you is a matter of propositional attitudes, which may result from a cognitive bias.

So, what do you think? Did I miss anything? Or perhaps this was not the answer you were looking for?

Published by Fred M R

I am a language instructor and amateur philosopher. I speak native Spanish, fluent English (C2), intermediate French (A2-B1), and basic Japanese (JLPT N5).

6 thoughts on “Piaget on learning”

  1. Thanks Fred! That was interesting, and makes a lot of sense.

    Knowing a bit about the underlying neuroscience, I think LTM is stored in associations throughout the brain. A particular LTM is probably stored in numerous locations which are bound together when it’s retrieved. STMs likely begin in both those locations and the hippocampus, with the hippocampus, if conditions are right, repeatedly firing the LTM locations to strengthen the synaptic connections in the days or weeks after the event. Eventually it fades from the hippocampus, but by then the LTM associations are strong enough to survive on their own. Although they probably won’t unless the memory is periodically retrieved.

    Working memory amounts to whatever in STM and LTM currently has causal effects throughout the brain.

    All of which is to say that the schemata are physical instantiations requiring a lot of work in the nervous system. We probably shouldn’t be surprised that novel propositions requiring a lot of adjustments take time and effort.

    Thanks again for putting it together!


  2. Fred,
    This stuff gets a bit confused for me since I use the “memory” term in one rather than several ways. So first I’ll outline my own model and then get into “the epiphany”.

    In my “dual computers” model (as in “brain” and “mind”), I term “memory” as one of three varieties of input to the mental form of computation by which we know existence. It’s essentially defined as “past consciousness that remains accessible to the present”. But then to better grasp what I’m talking about here one must grasp my conception of present consciousness (or the very thing which may later return in the form of memory).

    The central input to the conscious processor as I see it may be referred to as “valence”, or an element which feels anywhere from very good to very bad. I consider this essentially as fuel which drives the conscious form of function, somewhat like electricity drives the computers that we build. A perfect void here should eliminate qualia in general, and thus consciousness as I define the term.

    Then there’s a perfectly informational variety of input which may be referred to as “senses”. “Vision” would be a classic example of this, though associated valences should exist in vision as well, or the previous form of conscious input. Conversely “taste” is generally more divided between valence and information components. In any case the central nature of consciousness here is a “feel good/bad” component, and an “information” component to the entity that feels good/bad.

    Thus memory will consist of valence and sense inputs, though in a somewhat recalled and degraded way. Though we may remember a given pain for example, the memory doesn’t generally produce much in the way of valence. I often make no distinction between duration and other forms of memory, though those distinctions certainly can be useful as well.

    In my model these three varieties of input are not processed by means of “working memory”, but rather by means of “thought”. The thought processor interprets them and constructs scenarios about what to do to promote its valence based interests. There is just one form of output that the conscious processor has at its disposal that I know of (that is beyond “more thought”). This is “muscle operation”, or a title that should be self explanatory.

    So now to “the epiphany”. I’d rather not classify memory recall under this term, since “remember” seems sufficient. Even the word “know” may be a bit weak for an epiphany. Bartlett’s “schemas” does seem about right, though let me inject the concept of “reduction” as well. A child may be told that thunder results when God moves his furniture around (as you’ve mentioned). Here the child might grasp how moving furniture makes sound, so God in Heaven doing this could make sense as thunder, or an effective reduction!

    I’ll also mention that a person familiar enough might “remember” the model that I’ve presented above, though to grasp it in a working rather than just lecture level capacity, such complex ideas may need to be manipulated and experimented with to demonstrate what they both do and do not effectively mean. In this manner they might be reduced effectively enough for full epiphany.

    Regarding the memory storage of an epiphany, or any conscious input, I’ve noticed that repetition seems important. Furthermore conscious input / processing with greater valence based implications should tend to be thought about more, and thus better remembered in general.

    Regarding equilibration, yes that needs to happen given conflicting ideas. When someone tells you something that challenges your existing beliefs (or schemas, or reductions) you’ll then have valence based interests to figure out where the failure happens to be so that you might function more effectively here. If we’re talking about discrepancies between accepted math or physics for example, the ideas probably aren’t grasped well enough for consistency. Or could it be that someone’s feeding you crap? Or maybe not, though you have some preexisting crap ideas which thus don’t conform? All to be assessed.


    1. So let me see if I get your idea first:

      Three types of input:
      1. Valence: driving input that feels anywhere from very good to very bad. Some sort of motivational input that either drives the machine towards or against certain courses of action.
      2. Sense: Sensory input from sense-organs. Sense inputs may also come in hand with valence inputs.
      3. Memory: Previous valence and sensory input stored in the mind.

      Valence and sense may be reduced to two properties of a single type of input (Occam’s Razor), one that is both informational and motivational, given that both are inputs and, as you say, always come together.
      So this is like a dual aspect theory of input. But, a prima facie problem with this is to assume that motivational drives are essential in the objects themselves that produce these inputs before they are processed by a mind (or dual computer).
      To say that an input has an essential valence property is to say that it is already good or bad without being first processed by a dual computer that decides, given its own structure, whether this input should be processed as good or bad. This problem is not solved if we reject the dual-aspect theory that I just proposed and instead embrace your original model of two separate types of inputs (valence and sense input).
      De facto, the same sense input could have a different valence input depending on what mind is processing it, so the valence input cannot be essential to the object causing it alongside the sense input. If I find pizza good and you find it bad, then there is a problem with valences being inputs from the outside world.

      Sticking with the dual aspect theory of inputs (valence/sense input), memory can be reduced to valence/sense input stored in the dual computer, right?

      Basically, the thought processor in this case works as a working memory that interprets and constructs scenarios with present consciousness (or present valence/sense input) alongside past consciousness (stored valence/sense input =df. memory). The outputs of these interpretations and constructions are basically overt behaviors, or muscle operations.


        Yeah that’s a good stab at it Fred, and especially given how little information I gave you. Since I’ve got you this far I’ll present a broader account that should help show where I’m coming from.

        From the top of this diagram we have a nervous system box attached with lines rather than arrows to another, to indicate that I consider the entire thing to function non-consciously. Then there are three lines to boxes which concern nervous system function, or its input, processing, and output. These boxes have arrows however, so now we can start following the path.

        The nervous system (or first of the two computers) accepts input information, processes it, and ultimately institutes various kinds of output function. One is obviously muscle function, though I also have a box for countless non-specified outputs since I don’t know what all the nervous system effectively “does”. More relevant to this discussion however is that the nervous system produces qualia, or a standard term that I consider effective for defining consciousness itself. (Furthermore I believe that there must be brain mechanisms which produce qualia, and I’m intrigued by the potential for the electromagnetic radiation associated with synchronous neuron firing to do so.) There are five boxes connected with mere lines rather than arrows, meaning that I consider them all to exist as qualia / consciousness (and maybe as EM radiation). Regardless this is the second of the dual computers, and we’ve been discussing these five components which make it up.

        Note that valence, senses, and memory have arrows that lead to the thought processor. (Which is qualia leading to more qualia? Yep.) Anyway you’ve wondered how there can be good/bad valence before the processor. I consider this needed to activate the system itself. Essentially if you had no input valence you should be perfectly lethargic, and I’d say to the point that your consciousness in general should simply vanish. But given the initial motivational spark of valence, the evolved thought processor should interpret its present inputs and construct scenarios about what to do, and often given its hopes (which feel good) and worries (which feel bad) about the future. Thus time sensitivity should exist here. It should have memory which joins it somewhat with its past as mentioned before, and hope/worry which should join it somewhat with perceived future experiences in time. Then the last qualia box represents when muscle operation decisions are made. Here the nervous system must detect and implement this in order for such function to occur.

        While you could reduce valence and sense to one thing, since they are qualia input after all, I like keeping them separate. One creates an entity with motivation, while the other provides that entity with information to potentially use in its efforts to feel good rather than bad going forward.

        I agree that the same base input could feel good to one mind and bad to another. This might be a taste, image, sound, or whatever. Different people can have different interests. To go far more extreme than pizza, let’s say two people are in a fight to the death while their beloved families can only watch this happen. An image of the winner should be perceived by each family with opposing valences. I consider my model to account for such circumstances just fine.

        Yes like valence and sense could be reduced to one form of input, memory could be included in there too. I like breaking this up however since in practice we consider memories as degraded recreations of past qualia that tend to be invoked from time to time.

        You don’t need to be prompt with me. Depending on what’s happening over here I’m commonly not. But I’d love to show you how this model effectively works, and you do seem to have the right perspective.


        1. It gets a little more complicated the more I try to comprehend the nature of your three types of inputs and your dual computer system from this figure. Can you make a diagram where you show the interactions of the dual computer with its environment (like organism-niche interactions in functional terms), or is it already stated in the figure you showed me?


        2. I think I see the issue Fred, or hope I do. It could be that I (ironically) didn’t associate this diagram in terms of things that you already grasp, or my conception of how we have epiphanies. So try this:

          Though many seem to consider the human brain to function as “a dual computer”, which is to say as a single computer which has both conscious and non-conscious elements to it, my model is framed in terms of two fundamentally different varieties of computer under a distinct hierarchy. If we remove the entire “qualia / consciousness” series from my diagram for example, what’s left? This should represent the function of an organism with a central nervous system that accepts nerve input, processes it by means of synapse and nerve operation (I’m told including “AND”, “OR”, and “NOT” gates), and the resulting output should animate various instruments like muscles for its operation. I think that’s a pretty standard conception, though let me know if you wouldn’t make this reduction, or if it doesn’t seem clear. It could be that ants function in such a robotic way.

          If this “first computer” does seem effectively reduced, the second isn’t standard. So let’s see if you can get it propositionally (and then some day perhaps in attitude as well).

          I’m saying that for an evolved conscious entity, qualia in itself may effectively be said to exist in a computational capacity in its own right. For us this is essentially what’s perceived (or input), pondered (or processed), and decided (or output). It’s the medium by which existence is experienced by you, me, and all else for which there is anything it is like to exist.

          The purpose of my model is to help the field of psychology finally gain a fundamental reduction regarding our function, one that’s as effective as established ideas in the harder forms of science.

