2005-2006 MICS Student Research
By Rebecca Price
Some Applications of Number Theory to Games and Puzzles
Advisor: Dr. Maria Zack
This project will consider applications of techniques from number theory to games and puzzles. For example, the recent number-puzzle craze Sudoku is directly connected to the number-theoretic topic of magic squares.
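As a purely illustrative sketch of the magic-square topic (not code from the project itself), the classical Siamese method constructs a magic square of any odd order n, in which every row, column, and main diagonal sums to the magic constant n(n² + 1)/2:

```python
def magic_square(n):
    """Build an n-by-n magic square for odd n using the Siamese method."""
    square = [[0] * n for _ in range(n)]
    row, col = 0, n // 2  # start in the middle of the top row
    for k in range(1, n * n + 1):
        square[row][col] = k
        # move up and to the right, wrapping around the edges
        r, c = (row - 1) % n, (col + 1) % n
        if square[r][c]:  # cell already filled: drop down one row instead
            r, c = (row + 1) % n, col
        row, col = r, c
    return square

# magic_square(3) yields the classic square whose rows, columns,
# and diagonals each sum to 3 * (9 + 1) // 2 = 15.
sq = magic_square(3)
```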
By Rebecca Cooper
Building Models of Church Attendance and Giving
Advisor: Dr. Greg Crow
Pastors and laymen have wondered whether there is a connection between attendance and giving in churches. To try to answer this question, data on both church attendance and giving will be collected from the last ten years. Explanatory variables may include whether school is in session, holidays such as Christmas and Easter, whether the church had a senior pastor, and the number of Sunday-morning services; data on these variables will be collected as well. Multivariate methods will then be used to analyze the data and to build models of church attendance and giving.
By Justin Kerk
Machine Learning of Complex Behaviors by Remembering Sequences that Lead to Rewards
Advisor: Dr. Jeff McKinstry
Neuroscientists have discovered many brain regions that store and recognize sequences of stimuli. We investigate through computer simulation whether complex tasks can be learned by storing sequences that predict reward. Our algorithm remembers sequences of past stimuli produced by a complex task (modeled as a finite state machine) and uses these sequences to uniquely identify the current “state” required by the well-known Q-learning algorithm, which is guaranteed to learn to optimize the total expected reward. We tested this novel combination of sequence learning and Q-learning on several tasks that neuroscientists use to study behavior. Initial results were successful only for the simplest tasks; ongoing work is exploring the reasons for failure on the more complex ones.
By Ryan Hayes
Learning to Predict the Future by Storing Sequences from the Past
Advisor: Dr. Jeff McKinstry
We have developed a simple new algorithm for learning to predict the future, inspired by the sequence-storing and matching capability of the brain. The algorithm simply stores and matches sequences of the inputs and outputs observed as a subject interacts with a device, for example an ATM. It implicitly learns a model of a certain class of devices and, given enough sample data and sufficiently long sequences, will always correctly predict the device's output. This suggests that the brain could learn fairly complex tasks just by storing and matching sequences.
Using computer simulations, we have tested the algorithm on nine complex “cognitive” tasks used by neuroscientists to train primates. The algorithm quickly learned to predict, with high accuracy, the next output of the simulated test apparatus, simply by storing sequences of observations. These results confirm the power of sequence storage, which is ubiquitous in the brain, and suggest that tasks solvable by storing a finite number of sequences may be too simple to probe the more advanced cognitive capacities of primates.
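A hypothetical sketch of prediction by sequence storage and matching (this is not the project's code; the storage scheme and task are simplified for illustration). Each observed history suffix is stored with the output that followed it, and prediction returns the output associated with the longest matching stored context:

```python
class SequencePredictor:
    """Predict a device's next output by matching stored input/output
    sequences against the recent history (illustrative sketch)."""

    def __init__(self, max_len=4):
        self.max_len = max_len
        self.table = {}    # (context tuple, input) -> output last seen
        self.history = []  # list of (input, output) pairs observed so far

    def predict(self, inp):
        # try the longest stored context first, falling back to shorter ones
        for n in range(min(self.max_len, len(self.history)), -1, -1):
            context = tuple(self.history[-n:]) if n else ()
            if (context, inp) in self.table:
                return self.table[(context, inp)]
        return None  # no matching sequence stored yet

    def observe(self, inp, out):
        # store every suffix of the history paired with this input/output
        for n in range(min(self.max_len, len(self.history)) + 1):
            context = tuple(self.history[-n:]) if n else ()
            self.table[(context, inp)] = out
        self.history.append((inp, out))


# Hypothetical device whose output toggles on every "press" input:
predictor = SequencePredictor()
for out in ["on", "off", "on", "off", "on", "off"]:
    predictor.observe("press", out)
# After these observations, the stored sequences suffice to predict
# that the next "press" yields "on".
```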