Diffstat (limited to 'paper2/thesis.tex')
-rw-r--r--  paper2/thesis.tex | 117
1 file changed, 61 insertions(+), 56 deletions(-)
diff --git a/paper2/thesis.tex b/paper2/thesis.tex
index 6f68bb4..0fac526 100644
--- a/paper2/thesis.tex
+++ b/paper2/thesis.tex
@@ -46,7 +46,11 @@
% \noindent blah
% }
-\abstract{ABSTRACT
+\abstract{A model that simulates learning also has to account for the effect transfer has on the acquisition of new skills.
+ Learning a skill that shares steps with a previously learned one speeds up acquisition.
+ This thesis presents an ACT-R model of a task used by \citet{Frensch_1991} to investigate transfer learning.
+ It gives a general overview of learning in production systems and explains the components of the model.
+ Due to bugs in the ACT-R implementation used, no results can be presented; however, pain points in working with ACT-R are discussed to motivate future work.
}
\begin{document}
@@ -54,21 +58,23 @@
\section*{Introduction}
-Transfer learning is the ability to apply lessons learning from one task, to another related or even unrelated task.
-Living in a complex environment like the real world, a plethora of different tasks like navigating areas, finding things visually or preparing a meal have to be done. \\
+When trying to understand how humans learn, transfer learning is particularly interesting.
+Skills acquired through training can speed up the acquisition of a different skill through some mechanism.
+Modeling this mechanism needs to take into account all of the steps the mind goes through when solving a task, so that they can be re-used, or rather transferred, to another task.
+Unified Theories of Cognition are what \citet{newell1994unified} argues to be the approach needed to gain a complete understanding of the human mind.
+Also called cognitive architectures, they combine all of the specialized faculties of the mind into one single framework that ideally mimics what the human mind does.
+Using such an architecture, it should be possible to describe a task in detail and observe transfer learning to another task.
-much more efficient if knowledge from tasks can be reused in other tasks
+Transfer learning was previously examined by \citet{Frensch_1991} to differentiate between the transfer effects of learning the components of a task and of learning the composition of those components.
+They used an experiment introduced by \citet{Elio_1986}, in which multi-step mathematical equations have to be learned under different ordering conditions.
+To test transfer, one equation is swapped for a new one and the speed of learning the new equation is measured.
+This kind of task seems appropriate to model in a cognitive architecture to see how it predicts transfer learning.
+ACT-R \citep{anderson2004} is an established cognitive architecture that uses productions to model procedures in the mind.
+There are several methods that use these productions to describe learning.
+\citet{Brasoveanu_2021} compared different reinforcement learning algorithms in one such method, although using a lexical task.
+For this they created a re-implementation of ACT-R in Python \citep{Brasoveanu_2020}, which seems like a good starting point for implementing Elio's task in a model.
-\citet{Frensch_1991} observed differences in learning speed depending on condition, i.e.\ the order in which procedures are presented.
-
-% \citep{anderson}
-% \citep{Taatgen_2013}
-% \citep{Brasoveanu_2021}
-% \citep{Frensch_1991}
-% \citep{Elio_1986}
-
-Cognitive Architectures, modeling learning, production systems, ACT-R, frensch task
\subsection*{Productions}
@@ -107,8 +113,6 @@ Each production starts with a baseline utility value, which gets updated by the
\subsection*{Learning}
-\todo[inline]{Retrieval (activation) strength, utility learning, production compilation, \dots}
-
There are a variety of methods production systems use to model learning.
ACT-R can adjust which production is given preference during selection or create new productions based on existing ones and the model's state.
@@ -121,8 +125,6 @@ When two productions are successfully called in a row, a production compilation
Since the compiled productions are specific to the buffer values when the compilation was done, there can be many different combined productions of the same two productions.
E.g.\ a production starting retrieval of an addition fact and a production using the retrieved fact can combine into specific addition-result combinations, skipping retrieval (shown in Table~\ref{tab:prodcomp}).
-(do stuff allegory? learning general production from specific ones (not used))
-
ACT-R's subsymbolic system also models delays and accuracy of the declarative memory, where retrieving memories can fail based on their activation strength.
Activation strength increases the more often a memory is created or retrieved.
Learning new facts and increasing their activation strength is also part of the learning process in an ACT-R model.
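In standard ACT-R, activation follows the base-level learning equation \citep{anderson2004}, in which every presentation or retrieval of a chunk adds to its activation while decaying over time:
\begin{equation*}
    B_i = \ln\left(\sum_{j=1}^{n} t_j^{-d}\right)
\end{equation*}
where $t_j$ is the time since the $j$-th use of chunk $i$ and $d$ is the decay parameter (0.5 by default).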
@@ -194,7 +196,7 @@ That means for each combination of x, y and z a different specific production ca
To investigate model behavior and potentially compare it to results from human experiments, it was decided to use an adapted version of the setup described in \citet{Frensch_1991}, which was first used in \citet{Elio_1986}.
Subjects are put in charge of determining the quality of water samples by performing simple mathematical operations with given indicator values per water sample.
-A water sample has an algae, a solids and multiple toxin and sandstone values, which are randomly generated for each sample.
+A water sample has an algae value, a solids value and multiple toxin and lime values, which are randomly generated for each sample.
There are six different 2-step equations that use these values and a seventh equation using all previously calculated results to determine the final result (see Table~\ref{tab:proc}).
To solve a procedure, subjects have to locate the values of used variables on the screen.
Some variables show multiple values; procedures using them indicate after an underscore how the value should be selected.
@@ -268,7 +270,6 @@ To complete the experiment in a manner a human adult would, the model is given a
This includes basic knowledge of possible numbers and mathematical operations it has to solve.
\subsection{Implementation}
-\todo[inline]{chunktypes, pre-knowledge}
The model was made using the ACT-R architecture \citep{anderson2004} through the pyactr \citep{Brasoveanu_2020} implementation.
The base model uses default parameters.
To enable production compilation and utility learning, the parameters ``production\_compilation'' and ``utility\_learning'' have to be set to ``True''.
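As a schematic example (simplified initialization code, not the exact setup used in the model), enabling these mechanisms in pyactr looks roughly like this:
\begin{verbatim}
import pyactr as actr

# Simplified sketch; the actual model also sets up an environment
# and initial chunks (see below).
model = actr.ACTRModel(subsymbolic=True,
                       utility_learning=True,
                       production_compilation=True)
\end{verbatim}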
@@ -282,21 +283,21 @@ Procedure chunks hold the operations, variables and values that make up a proced
The math goal chunk is used in the goal buffer and holds various slots used for operations, like the current operation, arguments, counters and flags.
The model gets some basic knowledge that does not have to be learned in the form of chunks set at model initialization.
-It knows each procedure already and can retrieve its operations and values with an key. \todo{specify that it still has to find the correct procedure to use?}
+It knows each procedure already and can retrieve its operations and values with a key.
+It still has to find the right key by visually searching for the current procedure on the screen.
It knows all numbers from 0 to 999 through the number chunktype.
-It has math operation chunks for all greater/less comparisons for numbers between 0 and 10. \todo{currently has even more chunks for some reason, check if necessary}
-It has math operation chunks for addition of numbers between 0 and 21.
+It has math operation chunks for all greater/less comparisons for numbers between 0 and 20.
+It has math operation chunks for addition of numbers between 0 and 20.
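+As a schematic example of how such prior knowledge can be added in pyactr (chunk type and slot names here are simplified, not the exact ones used in the model):
+\begin{verbatim}
+import pyactr as actr
+
+actr.chunktype("number", "value, hundreds, tens, ones")
+actr.chunktype("math_fact", "operation, arg1, arg2, result")
+
+model = actr.ACTRModel()
+
+# Numbers 0-999 with their digits.
+for n in range(1000):
+    model.decmem.add(actr.makechunk(typename="number", value=str(n),
+                                    hundreds=str(n // 100),
+                                    tens=str(n // 10 % 10),
+                                    ones=str(n % 10)))
+
+# Addition facts for addends between 0 and 20;
+# comparison facts would be added in the same way.
+for a in range(21):
+    for b in range(21):
+        model.decmem.add(actr.makechunk(typename="math_fact",
+                                        operation="add", arg1=str(a),
+                                        arg2=str(b), result=str(a + b)))
+\end{verbatim}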
All trials are generated before the simulation starts and ordered depending on condition.
The model uses an environment to simulate a computer screen.
-Elements are aranged in columns with the values in rows below their column header. \todo{get the pyactr tk working and put screenshot}
-Everytime the user inputs an answer or the variables change, the evironment variables are directly updated.
+Elements are arranged in columns with the values in rows below their column header.
+Every time the user inputs an answer or the variables change, the environment variables are directly updated.
User input and trial changes are detected from the model trace.
The model works through the tasks with a set of productions, which perform mathematical operations, search the screen, input answers and organize order of operations.
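A schematic example of how a screen can be presented to the model through pyactr's environment (positions, stimulus names and simulation parameters are simplified placeholders, not the actual trial layout):
\begin{verbatim}
import pyactr as actr

env = actr.Environment(focus_position=(0, 0))
model = actr.ACTRModel(environment=env)

# One trial screen: column headers with a value below each header.
screen = {"algae_header":  {"text": "ALGAE",  "position": (50, 20)},
          "algae_value":   {"text": "37",     "position": (50, 60)},
          "solids_header": {"text": "SOLIDS", "position": (150, 20)},
          "solids_value":  {"text": "512",    "position": (150, 60)}}

sim = model.simulation(realtime=False,
                       environment_process=env.environment_process,
                       stimuli=[screen], triggers="space", times=10)
\end{verbatim}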
\subsubsection*{Greater/Less-than Operation}
-\todo[inline]{Maybe better as figure note or in appx.\ and simpler/shorter description}
This pair of operations compares two multi-digit numbers and sets the greater/less number as the answer.
For each digit (hundreds, tens, ones) there is a set of productions comparing that digit of the two numbers.
@@ -310,20 +311,20 @@ Depending on the result, either number 1 or number 2 will be written into the an
This operation adds two numbers through column-addition.
The first production retrieves the sum of the ones digits of the two numbers.
The sum is put into the ones digit of the answer.
-Next it tries to retrieve an addition operation from memory, where ten plus any number equals the previously found sum. \todo{maybe 10 instead ten}
+Next it tries to retrieve an addition operation from memory, where 10 plus any number equals the previously found sum.
If the retrieval fails, the result of the ones addition was less than ten and no carry-over is necessary.
-If the retrieval succeeds, a carry flag is set and the second addend of the retrieved operation (the part over ten) is set as the ones digit answer.
+If the retrieval succeeds, a carry flag is set and the second addend of the retrieved operation (the part over 10) is set as the ones digit answer.
Now the sum of the tens digits of the numbers is retrieved.
-If the carry flag is set, add one to the sum.
+If the carry flag is set, add 1 to the sum.
Again check for remainder and set a carry flag if necessary.
Then the same repeats for the hundreds digits.
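As a schematic example, the carry check can be written as a pair of productions roughly like the following (state and slot names are simplified, not the exact productions of the model):
\begin{verbatim}
# Request a fact "10 + x = ones sum"; if none exists, the sum was below ten.
model.productionstring(name="check_carry", string="""
    =g>
    isa mathgoal
    state check_carry
    onessum =s
    ==>
    =g>
    isa mathgoal
    state await_carry
    +retrieval>
    isa math_fact
    operation add
    arg1 10
    result =s""")

# Retrieval failed: no carry-over, the ones digit stays as it is.
model.productionstring(name="no_carry", string="""
    =g>
    isa mathgoal
    state await_carry
    ?retrieval>
    state error
    ==>
    =g>
    isa mathgoal
    state add_tens""")
\end{verbatim}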
\subsubsection*{Multiplication Operation}
This operation multiplies two numbers through repeated addition.
-Multiple productions handle cases in which one of the arguments is one or zero and directly set the answer accordingly.
-First, it tries to retrieve the sum of the second argument plus itself and sets a counter to one.
-If the retrieval succeeds, set the answer to the sum and increment the counter by one.
+Multiple productions handle cases in which one of the arguments is 1 or 0 and directly set the answer accordingly.
+First, it tries to retrieve the sum of the second argument plus itself and sets a counter to 1.
+If the retrieval succeeds, set the answer to the sum and increment the counter by 1.
While the counter is not equal to argument 1, retrieve the sum of argument 2 plus the result and increment counter.
If the counter is equal to argument 1, the operation is finished.
If the retrieval of the sum fails, save arguments and counter in different slots and change the current operation to addition, as well as the next operation to multiplication.
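Stripped of the production machinery, this corresponds roughly to the following control flow (plain Python for illustration; \texttt{retrieve\_sum} stands in for a declarative retrieval and returns \texttt{None} when no matching addition fact is known):
\begin{verbatim}
def retrieve_sum(a, b, known_sums):
    # Stand-in for retrieving an addition fact from declarative memory.
    return a + b if (a, b) in known_sums else None

def multiply(arg1, arg2, known_sums):
    # Special cases handled by dedicated productions in the model.
    if arg1 == 0 or arg2 == 0:
        return 0
    if arg1 == 1:
        return arg2
    if arg2 == 1:
        return arg1
    answer, counter = arg2, 1
    while counter != arg1:
        result = retrieve_sum(arg2, answer, known_sums)
        if result is None:
            # In the model: switch the current operation to column
            # addition and return to multiplication afterwards.
            result = arg2 + answer
        answer = result
        counter += 1
    return answer
\end{verbatim}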
@@ -340,7 +341,7 @@ Additionally a carry variable will be set, which increases the subtrahend by 1 o
The motor module is used to input the answers and to press continue.
When the current operation is to type the answer, the first production requests the tens digit to be pressed on the keyboard.
-When the action is finished, the ones digit and spacebar to continue are requested to be pressed in turn.
+When the action is finished, the ones digit and space bar to continue are requested to be pressed in turn.
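+A schematic example of such a key press request through the manual buffer (goal slot and state names are simplified):
+\begin{verbatim}
+model.productionstring(name="type_tens_digit", string="""
+    =g>
+    isa mathgoal
+    state type_answer
+    anstens =t
+    ?manual>
+    state free
+    ==>
+    =g>
+    isa mathgoal
+    state type_ones
+    +manual>
+    isa _manual
+    cmd press_key
+    key =t""")
+\end{verbatim}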
\subsubsection*{Visual System}
@@ -367,42 +368,42 @@ One production detects if the current operation is finished and another operatio
Since operations use both the full numbers and their digits, a set of productions fills digit slots with the digits of a number and vice versa.
-\begin{figure}[H]
- \centering
- \caption{Logic Flow of Addition}
- \label{fig:addition}
- %\includegraphics[width=1.1\textwidth]{frensch.png}
+% \begin{figure}[H]
+% \centering
+% \caption{Logic Flow of Addition}
+% \label{fig:addition}
+% %\includegraphics[width=1.1\textwidth]{frensch.png}
- \bigskip
- \raggedright\small\textit{Note}. When each production is executed depending on state. Either example for one operation or figures for all?\end{figure}
+% \bigskip
+% \raggedright\small\textit{Note}. When each production is executed depending on state. Either example for one operation or figures for all?\end{figure}
\section*{Results}
-Without enabling the subsymbolic system and its learning algorithms, the average time the model takes to solve a specific procedure stays the same over the experiment (Figure~\ref{fig:RT}).
+Without enabling the subsymbolic system and its learning algorithms, the average time the model takes to solve a specific procedure stays the same over the experiment.
This is expected; while each finished mathematical operation does get remembered by the model, the number of argument-operation permutations is too high to be useful in so few trials.
Due to multiple roadblocks in working with the subsymbolic system in pyactr, it was not possible to simulate a full experiment run with it enabled.
Details about these difficulties will be reviewed in the Discussion.
-\begin{figure}[H]
- \centering
- \caption{Mean solution time in acquisition and transfer phase}
- \label{fig:RT}
- % \includegraphics[width=1.1\textwidth]{RT.png}
+% \begin{figure}[H]
+% \centering
+% \caption{Mean solution time in acquisition and transfer phase}
+% \label{fig:RT}
+% % \includegraphics[width=1.1\textwidth]{RT.png}
- \bigskip
- \raggedright\small\textit{Note}. Mean solution time of all six procedures of a water sample in blocks of five samples.
- \end{figure}
+% \bigskip
+% \raggedright\small\textit{Note}. Mean solution time of all six procedures of a water sample in blocks of five samples.
+% \end{figure}
-\begin{figure}[H]
- \centering
- \caption{Comparison with human experiment}
- \label{fig:RTcomp}
- % \includegraphics[width=1.1\textwidth]{RT.png}
+% \begin{figure}[H]
+% \centering
+% \caption{Comparison with human experiment}
+% \label{fig:RTcomp}
+% % \includegraphics[width=1.1\textwidth]{RT.png}
- \bigskip
- \end{figure}
+% \bigskip
+% \end{figure}
\section*{Discussion}
@@ -427,6 +428,8 @@ Such a library would additionally serve as an example of proper implementation o
\subsection*{Model Improvements}
+Most importantly, the next step from this point on would be to solve the production compilation problem and actually compare the model's learning behavior with human data.
+
While the model currently does not work correctly, a variety of improvements are possible once the technical issues are resolved.
Mathematical operations could be modeled much more generally and extended to work with larger and negative numbers.
This would make it possible to learn mathematical facts from the ground up, instead of relying on a set of given knowledge.
@@ -434,7 +437,9 @@ Introducing multiple ways of doing an operation, like addition by counting from
Another important improvement would be better switching between tasks, as e.g.\ multiplication requires additions to be performed.
This required a complex set of productions, which a general task-switching implementation could simplify.
-Most importantly, solving the production compilation problem and actually comparing the models learning behavior with human data would be the next step from this point on.
+It would be interesting to see how other cognitive architectures behave in comparison to ACT-R.
+Candidates would be SOAR \citep{laird2022introductionsoar} and especially the PRIMs architecture \citep{Taatgen_2013}, which specializes in the transfer of knowledge through small knowledge bits.
+
\printbibliography{}