MEG and EEG provide rich temporal information by measuring, with millisecond resolution, the electromagnetic fields generated by neuronal currents. Because these fields are measured outside the head and reflect a superposition of sources from many different anatomical locations, our ability to characterize the spatiotemporal dynamics of any particular location is limited by the accuracy of the inverse model. Many inverse models are currently in use, ranging from simple dipole fits to more sophisticated approaches such as spatial filters (e.g., beamformers), Bayesian models, extended Kalman filters, and particle filters.

Over the past several years I have developed state-of-the-art inverse models based on spatial filtering, encompassing beamformers, spatially matched filters, minimum-norm estimation, LORETA, and sLORETA, and have studied the performance of these models under various scenarios using both simulations and empirical data. These algorithms are currently in use by colleagues at SickKids Hospital and Baycrest Hospital. In addition to high localization accuracy, they employ computational techniques such as POSIX threading, which allow a full characterization of spatiotemporal dynamics across tens of thousands of brain voxels from a typical MEG dataset (hundreds of thousands of samples from 150 channels) in under 30 seconds. I have further modified these techniques to provide higher localization accuracy and better constraints, particularly for the localization of deep sources such as hippocampal and frontal sources. We have used these methods to study stuttering in adults and children and human amblyopia, as well as cognitive memory processes in a transverse patterning task and a virtual Morris water maze. For more information, see Chapter 4 in Magnetoencephalography.
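
To make the spatial filtering approach concrete, the sketch below shows how an LCMV-style beamformer weight matrix can be computed for a single voxel and applied to sensor data. This is a minimal illustration, not the implementation described above: the function name, the regularization scheme, and the example dimensions are assumptions chosen for clarity, and only NumPy is used.

```python
import numpy as np

def lcmv_weights(leadfield, cov, reg=0.05):
    """Illustrative LCMV beamformer weights for one voxel.

    leadfield : (n_channels, 3) forward field for three dipole orientations
    cov       : (n_channels, n_channels) sensor covariance estimated from data
    reg       : Tikhonov regularization as a fraction of mean sensor power
    """
    n = cov.shape[0]
    # Regularize the covariance before inversion (assumed here; common when
    # the number of samples is limited relative to the number of channels).
    cov_reg = cov + reg * np.trace(cov) / n * np.eye(n)
    cov_inv = np.linalg.inv(cov_reg)
    # W = (L^T C^-1 L)^-1 L^T C^-1 : unit-gain constraint toward the voxel
    gram = leadfield.T @ cov_inv @ leadfield
    return np.linalg.solve(gram, leadfield.T @ cov_inv)

# Usage sketch with synthetic data: project sensor time series into a
# voxel's three-orientation source time courses.
rng = np.random.default_rng(0)
n_channels, n_samples = 150, 1000
data = rng.standard_normal((n_channels, n_samples))
cov = data @ data.T / n_samples
leadfield = rng.standard_normal((n_channels, 3))
w = lcmv_weights(leadfield, cov)   # shape (3, n_channels)
source_ts = w @ data               # shape (3, n_samples)
```

In practice this per-voxel computation is independent across voxels, which is what makes a threaded implementation (e.g., one worker per block of voxels) effective for whole-brain scans.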