Journal of Applied Research and Technology (JART)
Vol. 12. Issue 3.
Pages 568-584 (June 2014)
Open Access
Implementation of a Computational Model for Information Processing and Signaling from a Biological Neural Network of Neostriatum Nucleus
C. Sanchez-Vazquez1, M. Avila-Costa2, F. Cervantes-Pérez3
1,2 Laboratorio de Neuromorfología Experimental y Aplicada, Facultad de Estudios Superiores Iztacala, Universidad Nacional Autónoma de México, México, D.F., México
3 Universidad Abierta y a Distancia de México, México, D.F., México

Under a Creative Commons license
Abstract

Recently, several mathematical models have been developed to study and explain the way information is processed in the brain. The published models cover a myriad of perspectives, from single neuron segments to neural networks and, lately, with the use of supercomputing facilities, to the study of whole environments of nuclei interacting under massive stimulation and processing. Some of the most complex, and also most studied, neural structures are the basal ganglia nuclei of the brain, among which we find the Neostriatum. Currently, only a few papers on large-scale, biologically based computational modeling of this region have been published. It has been demonstrated that the basal ganglia support functions related to learning and decision making based on rules of the action-selection type, which are of particular interest for the machine autonomous-learning field. This knowledge could clearly be transferred between areas of research. The present work proposes a model of information processing built by integrating knowledge generated from widely accepted experiments in both morphology and biophysics, through theories such as the compartmental electrical model, Rall's cable equation, and the Hodgkin-Huxley gating-particle formalism, among others. Additionally, the leaky integrator framework is incorporated in an adapted function. This was accomplished in a computational environment prepared for large-scale neural simulation, which delivers data output equivalent to that of the original model; this output can not only be analyzed as a Bayesian problem, but also successfully compared with the biological specimen.

Resumen

Recently, mathematical models have been developed that make it possible to explain and define, from an engineering standpoint, how the information carried by the ion-generated electrical signals of the nervous system of living beings is processed. Numerous proposals of this kind have been designed, from the discrete to the massive, operating as segments of a neuron, as a network and, recently, with the aid of supercomputing, as sets of nuclei interacting in large-scale environments of stimuli and processing. Among the most complex and most interesting neural structures is the group known as the Basal Ganglia, of which the Neostriatum forms part, and on which few computational modeling works have been carried out. It has been shown that this region hosts learning functions, as well as others related to decision making under the action-selection rules that are widely studied in machine autonomous learning, allowing knowledge to be transferred from one field of research to the other. The present work proposes a real-time computational model built by integrating knowledge obtained from widely accepted experiments in biophysics, applying the theory of electrical compartments, Rall's cable equation, and the Hodgkin-Huxley gating-particle laws, among others. These models are incorporated into a framework based on the leaky integrator function, through a large-scale neural simulation computational environment that delivers a data output equivalent to that of the biological model, suitable for analysis as a Bayesian problem and successfully compared with the biological specimen.

1. Introduction

In the past two decades, researchers have increasingly become interested in building computer simulations of diverse brain structures, based upon morphological and physiological data obtained from biological experimental procedures.

The efforts to build these constructs are guided solely by the findings in biological models, leading to specific algorithms [1, 2]. Thus, they are aimed at the creation of neural simulation platforms, either designed specifically to suit a particular characteristic of a given region [3] or intended for general purposes, demonstrating that many functions present in specific regions of the nervous system can be applied generally and, at the same time, are also present across many species at many levels of differentiation [4].

All of these tools have been useful both for consistently recreating the findings in different scenarios and for welcoming new proposals, directing new experiments, or even predicting new findings in diverse brain structures [5, 6]. The use of these methodologies has made possible the simulation of neural processes at many levels of detail: from membrane regions with ionic channels, for simulating the effects of neuromodulation and neurotransmitter action on the membrane potential, through whole neurons with all their electrophysiological responses [7, 8], to specific cell networks [9, 10]. These algorithms and computational environments are limited only by the current state of the art of the respective experimental procedures on one side, and by the available computing capacity on the other [11-13].

Many simulations of diverse regions and networks, as well as analyses of several information processing strategies describing how these neural networks work, have been published elsewhere [14-17]. To support this research, plenty of tools for building real-time simulations of diverse brain structures have been reported [18], thus helping and directing the biological findings through cycles between biological experiments and simulations, each refining the other in every iteration [19]. Given this knowledge production in biophysics, there is an understandable growing interest in the computer engineering field in studying information processing in living neural structures, because of the so-called "Intelligent Planning and Motivated Action Selection" [20, 21], which is a task well characterized in animal behavior and also a computational problem intensively studied in the artificial intelligence field [22-24].

These particular properties of information processing and decision making have been discovered in some brain structures as the respective methodologies for their study have been developed and the equipment needed for the experimental procedures has been perfected. In mammals, the specialized brain structures where these functions have been demonstrated, but not yet well understood, are the basal ganglia (BG), located in the subcortical brain region [25, 26]. The BG are composed of several nuclei, of which the neostriatum (NS) is widely accepted as the main input nucleus [27-29]. Although there are many theoretical approaches to the way this region processes information, the construction of the corresponding real-time computer models and analyses is just emerging [10, 25, 30, 31].

From the perspective of computational science, "Reinforcement Learning" [32] and "Action Selection" theories [33] were developed decades ago as machine learning strategies [20, 34]. They have therefore been associated with some of the functions of the BG [35, 36] and, more specifically, with the activity of the NS [37, 38]. However, for this nucleus, only a few real-time dynamical systems have been built that allow integrating, comparing, and testing the experiences and data acquired from biological models against computational ones [39, 40].

The present work extends the use of these methodologies through a general-purpose neural simulator [41, 42] running in a high-demand computational environment, which served to build a simplified model of the NS composed of mathematical models of its best characterized cell types, among them the main output neuron, organized in regions delimited by the interconnections of their respective interneurons [43, 44]. This structure was provided with both excitatory external signaling and bi-modal modulation as inputs, representing the effect of the cerebral cortex and thalamus on the NS on one side, and the dopaminergic (DA) effect on the other [45, 46]. The whole model was built strictly on morphological and physiological data reported in classical experiments reviewed in biological reports [47-49].

The output data of the model were processed and analyzed qualitatively against the biophysical experiments, and quantitatively by the same current/voltage analysis methodologies used to characterize separately each ionic current studied in electrophysiology [50, 51], as discussed in the results section.

2. Neostriatum: Anatomy and Function

The anatomical and physiological data that form the basis of our model are well known and are described in several reviews [43-45, 52]. The BG have been conceptualized as four nuclei that process information from the cerebral cortex along pathways related to movement, posture, and behavioral responses [53]. Initially, BG function was associated with movement execution and feedback control [54, 55], because the first knowledge of the BG came from a condition known as Parkinson's disease, which clinically expresses an impairment of motor responses [56-58]. It is now known that the BG are also involved in the processes of attention and decision making, as explained above.

Anatomical and physiological studies have revealed the NS cellular architecture, an internal network directed to the output of its main neuronal type, the medium spiny neuron (MSN), an inhibitory cell that forms a series of loops divided into two main classical circuits, called the direct and indirect pathways [59, 60] (Fig. 1a).

Figure 1.

A, classical model of NS connectivity. This nucleus receives excitatory input mainly from both the cortex and the thalamus. Its architecture is composed of patches and matrices which, under the modulatory influence of DA, determine the output signaling through the direct and indirect pathways to the other basal ganglia nuclei. The main unit is the MSN, which forms a series of inhibitory, excitatory, and modulatory connections inside the NS, as shown in B.


The NS also receives input from a nucleus that can change the internal state of the network: the DA action of the substantia nigra pars compacta, which is considered neither excitatory nor inhibitory, but modulatory [61]. This means that a dual effect is produced on the natural output MSN neurons: depending on several network variables, it can either reinforce the action of cortical stimulation on the MSN or not, or do both at the same time.

The default function of the BG output nuclei is to exert a widespread tonic inhibitory control over target structures. This starts in the NS under the influence of DA modulation, which is able to promote actions by disinhibiting their associated target structures while maintaining inhibitory control over others [62]. This prevailing model was proposed by Albin et al. in 1989 [55]; nevertheless, a full computational model still needs to be developed [11].

We have opted for a simplified mathematical model as the basis for validating the most relevant variables and incorporating them in a more controlled manner. We have chosen to reproduce the electrical responses and morphology of the MSN within a minimal circuit, adding the fewest synaptic contacts necessary to obtain comparable results. Although there are several cellular subtypes that affect and modulate the membrane potential of the MSN, some of them are not yet fully characterized or are still under discussion.

The signaling of the cells that take part in the network within the NS is complex and particular. The MSN is a cell that is normally silent, but its membrane potential keeps oscillating dynamically, at some moments making it easier to excite by summed input [63, 64]. In their default state, MSNs are largely silent and do not respond to low input levels. However, on receiving substantial levels of coordinated excitatory input, these cells yield a significant output whose magnitude may be subsequently affected by low-level inputs, which are ineffective when presented in isolation. This dichotomous behavior is described by the terms "down state" and "up state", respectively, for these two operation modes [50].

The remaining types of interneurons that make up the NS architecture also have particular signaling properties: a) the "giant cholinergic aspiny cell", electrophysiologically called the "tonically active neuron" (TAN) because it produces spontaneous bursts that directly affect the MSN [65, 66]; b) the "medium GABAergic" interneurons, divided electrophysiologically into two types, "fast spiking" (FS) and "plateau low-threshold spiking" (PLTS), named after their firing characteristics [67]. Together, these interneurons make up 3-10% of the total NS architecture and profile its input/output function by interconnecting with the MSN [68] in a network outlined in Fig. 1b.

We chose to consider only the afferents to the MSN that are best identified, namely FS neurons and TANs, and to check the simulation results against the biological model. First, FS interneurons [65, 66], characterized histologically as parvalbumin-immunoreactive neurons, affect the proximal synapses of the MSN with large-amplitude IPSPs and a strong capacity to block signals from the projection axons of the MSN [67]. Second, TANs, which are characterized as cholinergic neurons, have a modulatory effect, because they are activated by cortical afferents with lower latency than MSNs, which in turn are their targets [60].

3. Methods

3.1. Implementation of the Neostriatum Computational Cell Components

A computational neural model, whether robotic or purely theoretical, has to be composed of elements that are bio-mimetic, that is, intended to directly simulate neurobiological processes with the available computational resources and knowledge [69]. They have to be engineered in such a way that they provide an interface that allows the model to be queried and to handle some of the available variables in a controlled and limited way. A model that seeks to simulate complete behavioral competences also turns out to be impractical, because of the scale of the task, or impossible, because of the lack of the necessary neurobiological data, as many experiences have shown [13, 70, 71].

The process of building a biologically realistic model of a neuron, or else a network of such neurons, is based on the compartmental concept and involves the following three steps [72]:

  • a)

    Build a suitably realistic passive cell model, without the variable conductances.

  • b)

    Add voltage- and/or calcium-activated conductances.

  • c)

    Add synaptically activated conductances, connect the cells to others in a network, and provide artificial inputs to simulate the in-vivo inputs to the neuron.

The first two steps are explained below; the last one will be covered in the subsequent section.

For the first step, the key feature underlying excitability in a neuron is the ability to maintain a voltage difference between the inside (Vinside) and the outside (Voutside) of the cell. This is accomplished through the equilibrium potential Ei set by the ion concentrations [C]inside and [C]outside, given by the Nernst equation (for the complete mathematical modeling process see the appendix in the supplementary material):

Ei = (RT / ziF) ln([C]outside / [C]inside)(1)

In the presence of several different ions, the equilibrium potential of the cell depends on the sum of their relative permeabilities. Eq. (1) is integrated into the classical Goldman-Hodgkin-Katz solution [73-77]:

Vm = (RT / F) ln[(PK[K]out + PNa[Na]out + PCl[Cl]in) / (PK[K]in + PNa[Na]in + PCl[Cl]out)]

To use this theoretical approach on computing facilities and solve it in a real-time model, we need a linearized derivation of it, in which each ionic current is treated as ohmic and the membrane potential is given by the conductance-weighted sum of the equilibrium potentials:

Vm = (gK EK + gNa ENa + gCl ECl) / (gK + gNa + gCl)
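As an illustration only, the Nernst, Goldman-Hodgkin-Katz, and linearized relations above can be evaluated numerically. The concentrations, permeabilities, and conductances in the following minimal sketch are generic placeholder values, not parameters of the NS model described in this work:

```python
import math

R, F, T = 8.314, 96485.0, 310.0   # gas constant (J/(mol*K)), Faraday constant (C/mol), ~37 degrees C

def nernst(c_out, c_in, z=1):
    """Equilibrium potential in volts for one ionic species (Nernst equation)."""
    return (R * T) / (z * F) * math.log(c_out / c_in)

def ghk_voltage(P, out, inn):
    """Goldman-Hodgkin-Katz membrane potential in volts."""
    num = P['K'] * out['K'] + P['Na'] * out['Na'] + P['Cl'] * inn['Cl']
    den = P['K'] * inn['K'] + P['Na'] * inn['Na'] + P['Cl'] * out['Cl']
    return (R * T) / F * math.log(num / den)

def linear_vm(g, E):
    """Linearized (conductance-weighted) membrane potential."""
    return sum(g[i] * E[i] for i in g) / sum(g.values())

# Illustrative mammalian-like concentrations (mM), relative permeabilities, and conductances (nS)
out = {'K': 5.0, 'Na': 145.0, 'Cl': 110.0}
inn = {'K': 140.0, 'Na': 15.0, 'Cl': 10.0}
P   = {'K': 1.0, 'Na': 0.05, 'Cl': 0.45}
g   = {'K': 10.0, 'Na': 1.0, 'Cl': 2.5}
E   = {'K': nernst(out['K'], inn['K']),
       'Na': nernst(out['Na'], inn['Na']),
       'Cl': nernst(inn['Cl'], out['Cl'])}   # Cl is an anion, so the concentrations are inverted for z = 1

print('E_K = %.1f mV' % (1e3 * E['K']))
print('GHK V_m = %.1f mV, linearized V_m = %.1f mV'
      % (1e3 * ghk_voltage(P, out, inn), 1e3 * linear_vm(g, E)))
```

With these placeholder values the sketch yields a resting potential near -65 mV, in the range expected for a silent MSN-like cell.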

With this, and based on experimental data, we obtain a form to predict the value of the membrane potential at a given time. Next, it needs to be implemented in an algorithm that represents a morphological model of the specific cell. We can represent a piece of a neuron as a simple RC circuit, which can be constructed from a connectionist point of view and can be as complex as the computing facilities allow. Given the known capacitance of a piece of membrane, and starting with an initial voltage V(0), which can be obtained from another serialized compartment or from an external input such as synapses or other stimuli, we have:

Cm dV/dt = (Erest − V)/Rm + Iinput

To give a numerical solution of this passive model, it is much easier to simulate neural activity with these compartments, where particularities such as ionic membrane behavior and morphological properties can be added, thus allowing neurons within a network to be differentiated [78]. The general neural simulator available for working with this technique solves the differential equations with different integration techniques. Therefore, for a single compartment under a single ionic stimulation we have the following model:

where A is the area of the membrane compartment and CM is the specific membrane capacitance per unit area, in F/cm2. The actual membrane resistance (Rm) can be expressed in terms of the area of a spherical compartment as Rm = RM/(4πr²). This allows the time constant of the model to be calculated as τm = RmCm = (RM/A)(CM·A) = RMCM. We can calculate, for a membrane patch:

and, using τm:

Because of the magnitudes involved (millivolts, milliseconds, and picoamperes), and to be consistent with the International System of Units, we can solve this equation as follows:

For the complete mathematical development, see the corresponding section in the supplementary material.

For physiological consistency, we use the inverse of the resistance to calculate the ionic currents; thus gM = 10³/RM is the membrane conductance in µS/cm2.
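A minimal sketch of the passive compartment described above, integrated with forward Euler, is shown below. The geometry, specific parameters, and the injected pulse are illustrative placeholder values; the actual simulations of this work were run in a general neural simulator, not in this script:

```python
import math

# Illustrative (hypothetical) parameters for a single spherical passive compartment
r = 10e-4                 # radius in cm (10 um)
A = 4 * math.pi * r**2    # membrane area, cm^2
CM = 1e-6                 # specific membrane capacitance, F/cm^2
RM = 20e3                 # specific membrane resistance, Ohm*cm^2
E_rest = -0.080           # resting potential, V

Cm = CM * A               # total capacitance, F
Rm = RM / A               # total membrane resistance, Ohm
tau = Rm * Cm             # = RM*CM, independent of area (20 ms here)

def step(V, I_inj, dt=1e-5):
    """One forward-Euler step of Cm*dV/dt = (E_rest - V)/Rm + I_inj."""
    return V + dt * ((E_rest - V) / Rm + I_inj) / Cm

V, trace = E_rest, []
for n in range(3000):                        # 30 ms of simulated time, dt = 0.01 ms
    I = 20e-12 if 500 <= n < 2500 else 0.0   # 20 pA current pulse, 20 ms long
    V = step(V, I)
    trace.append(V)

print('tau = %.1f ms, peak depolarization = %.1f mV'
      % (tau * 1e3, 1e3 * (max(trace) - E_rest)))
```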

Finally, to calculate the dissipation of the voltage (V) between compartments, modeled as a continuous piece of membrane coupled through an axial resistance Ra, and given the known morphological properties of the neuron, we use Rall's cable equation [79, 80]:

λ² ∂²V/∂x² = τm ∂V/∂t + V

This equation can be solved for several boundary conditions. The axial resistance (Ra) depends on the cable geometry, diameter, and length, and on whether it is a sealed-end, finite, or semi-infinite cable [64, 81]. With these methodologies we coded the three main types of neurons of this particular network: MSN, FS, and TAN. All of them were built using simplified morphological models, well tested and known as "equivalent cylinder" models [82, 83].
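For orientation only, with the illustrative values RM = 20 kΩ·cm², RA = 100 Ω·cm, CM = 1 µF/cm², and a dendritic diameter d = 1 µm (placeholder values, not parameters of the NS model), the characteristic constants of such a cable are

\[
\lambda = \sqrt{\frac{R_M\, d}{4\, R_A}}
        = \sqrt{\frac{(2\times 10^{4}\ \Omega\,\mathrm{cm}^{2})(10^{-4}\ \mathrm{cm})}{4\,(10^{2}\ \Omega\,\mathrm{cm})}}
        \approx 0.07\ \mathrm{cm} \approx 700\ \mu\mathrm{m},
\qquad
\tau_m = R_M C_M = 20\ \mathrm{ms},
\]

so a steady signal decays to about 1/e of its value roughly 700 µm along an infinite cable of this geometry.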

In the second step, we need to add the dynamic conductances of the gated ions in the cell, as needed for the three types of neurons used in this model. These represent the channels that drive the electrical behavior of the neuron. For the passive compartment explained above, the value of each conductance, as the inverse of a resistance, was obtained using a probabilistic function of ion diffusion interpreted as transitions between permissive and non-permissive states of the molecular gates through which the channel ions can cross, hence dynamically changing the conductance of each patch of membrane:

dxi/dt = αi(V)(1 − xi) − βi(V) xi

where αi and βi are voltage-dependent rate constants describing the non-permissive-to-permissive and permissive-to-non-permissive transition rates, respectively [84, 85]. For each of the three cell types we modeled the Na+, Ca2+-dependent, and K+ currents, which are well-known variables, documented for their participation in shaping the output frequencies and waveform morphology. All those responses were tested separately against the results published for the real neurons.
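A minimal sketch of one such gated conductance is given below. The rate functions α and β, the maximal conductance, and the reversal potential are generic illustrative forms, not the fitted functions used for the MSN, FS, or TAN models:

```python
import math

g_max, E_rev = 10e-9, 0.050    # illustrative maximal conductance (S) and reversal potential (V)

def alpha(V):
    """Illustrative non-permissive-to-permissive rate (1/s); not a fitted striatal channel."""
    return 1e3 * math.exp((V + 0.040) / 0.010)

def beta(V):
    """Illustrative permissive-to-non-permissive rate (1/s)."""
    return 1e3 * math.exp(-(V + 0.040) / 0.010)

def gate_step(x, V, dt=1e-5):
    """Forward-Euler update of dx/dt = alpha(V)*(1 - x) - beta(V)*x."""
    return x + dt * (alpha(V) * (1.0 - x) - beta(V) * x)

x, V = 0.0, -0.030             # gate open fraction, held membrane potential (V)
for _ in range(2000):          # 20 ms at the depolarized potential
    x = gate_step(x, V)

g = g_max * x                  # dynamic conductance of the membrane patch
I = g * (E_rev - V)            # resulting ionic current
print('open fraction = %.2f, conductance = %.2f nS, current = %.0f pA'
      % (x, g * 1e9, I * 1e12))
```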

3.2. Integration of Neural Models in a NS Network

In the third step mentioned in the section above, the model was interconnected using a simplified diagram according to Wilson, 1980 [68], shown in Fig. 1b. This schematic connectivity gives relevance to the position of each contact within the dendritic tree with respect to the other connections, the back-propagation between MSNs, the relation between patches and matrices, and the type of synapses within the NS: excitatory or inhibitory, plus modulatory DA effects. The model is focused on the responses of the MSN projections as a result of the simulation of the PSPs of the selected neuronal types, under the modulatory effect of dopamine. This simulation is generic and can be extended in the future with the characteristics of the direct or indirect pathway, and the responses obtained can be validated and discussed.

Finally, this network was tuned with the synaptic weights needed to reproduce the operating conditions. The physiological experimental data available do not include data analysis processed in real time, but only qualitative analysis of outputs; thus the network model had to be tuned empirically in cycles of trial and error [87-90].

For the integration of all the constructed algorithms, we used the leaky integrator neuronal type as our framework [86, 87]. In principle, this proposal does not completely fit the scope of our model, because its idea of a dynamic membrane potential obviates the need to model an abundance of ionic channels [88]. Nevertheless, we have updated these simplified neuronal units with fully conductance-modeled neurons instead, at the cost of a high demand for computing resources, but with the benefit of a more reliable interface to compare against biological experiments. The framework is then defined by the rate of change of an activation, which may be interpreted as the threshold membrane potential near the axon hillock. Let u be the total post-synaptic potential generated by the afferent input, k a rate constant that depends on the cell membrane capacitance and resistance, and ā the equilibrium activation; then:

where ȧ ≡ da/dt. The output y of the neuron, corresponding to the mean firing rate, is a monotonically increasing function of a. It is bounded below by 0 and above by some maximum value ymax, which may be normalized to 1. We have adopted a piecewise linear output function of the form [89]:

The choice of this form for y is motivated by the fact that the equilibrium behavior of the model is then analytically tractable. The activation space of the model is divided into a set of disjoint regions whose individual behavior is linear, and which may be exactly determined [62, 90].
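A minimal sketch of such a leaky-integrator unit with a piecewise-linear output is given below. The specific activation equation used here (da/dt = k(u − a)) and all parameter values are illustrative assumptions, not the tuned values of the NS model:

```python
def piecewise_linear(a, eps=0.1, m=1.0, y_max=1.0):
    """Piecewise-linear output: 0 below the threshold eps, slope m above it, saturating at y_max."""
    return min(max(m * (a - eps), 0.0), y_max)

def leaky_integrator(u, a=0.0, k=25.0, dt=1e-3, steps=200):
    """Integrate da/dt = k*(u - a), an assumed form of the activation dynamics, for a constant input u."""
    y = 0.0
    for _ in range(steps):
        a += dt * k * (u - a)
        y = piecewise_linear(a)
    return a, y

for u in (0.05, 0.4, 1.5):
    a, y = leaky_integrator(u)
    print('input u = %.2f -> equilibrium activation a = %.2f, output y = %.2f' % (u, a, y))
```

Sub-threshold inputs produce no output, intermediate inputs map linearly onto the firing rate, and large inputs saturate at the normalized maximum, which is the behavior assumed in the analysis that follows.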

The NS model built this way admits the possibility of local recurrent inhibition. Within each recurrent net, every node is connected to every other node by an inhibitory link with weight w. Let the non-zero slope of the output function be m, the equilibrium output of the ith node xi, and the output threshold ε; then the network equilibrium state is defined by the following set of coupled equations:

Now let JK = maxi(Ji). If w·m ≥ 1, then one solution to (12) and (13) is:

where δiK is the Kronecker delta. This solution may be easily verified by direct substitution.
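The selection property of such a recurrent net can be illustrated with a small numerical sketch. The connectivity, relaxation dynamics, and parameter values below are generic assumptions chosen only to satisfy w·m ≥ 1 and reproduce the qualitative behavior, namely that only the maximally salient channel remains above threshold at equilibrium:

```python
def recurrent_net(J, w=1.2, m=1.0, eps=0.1, dt=0.01, steps=5000):
    """Iterate a small recurrent net in which every node inhibits every other node with
    weight w; outputs use the piecewise-linear function described in the text."""
    out = lambda a: min(max(m * (a - eps), 0.0), 1.0)
    x = [0.0] * len(J)
    for _ in range(steps):
        y = [out(a) for a in x]
        for i in range(len(J)):
            # external salience J[i] minus recurrent inhibition from all other nodes
            drive = J[i] - w * sum(y[j] for j in range(len(J)) if j != i)
            x[i] += dt * (drive - x[i])   # leaky relaxation toward the net drive
    return [round(out(a), 2) for a in x]

saliences = [0.3, 0.9, 0.5, 0.2]
print(recurrent_net(saliences))  # converges to roughly [0.0, 0.8, 0.0, 0.0]: only the maximal salience survives
```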

In order to make contact with the idea of a channel salience ci as input, we put Ji = ws·ci, where ws is a measure of the overall synaptic efficiency of the MSN in integrating its inputs. The NS is supposed to consist of many recurrent nets of the type defined by eq. (12), each one processing several channels; the solution in eq. (14) implies, however, that only those saliences which are maximal within each network are contenders for further processing. Now, suppose there are N NS sub-networks (as in patches or matrices) and let cri be the salience on the ith channel of network r. Let crk(r) = maxi(cri) and let P = {crk(r): r = 1, ..., N} be the set of potentially active channels. The next step is to re-label each member of P with its network index so that each local recurrent network r obeys, at equilibrium, a relation of the form expressed in (14) for its maximally salient channel:

DA modulation. To activate the action of dopamine modulation on the MSN, it would be desirable to model the resulting innervation from the substantia nigra pars compacta, and particularly the short-latency DA signals associated with the onset of biologically significant stimuli [19, 28, 91]. The whole operation of the BG rests on the premise that these structures operate to release inhibition from desired actions while maintaining or increasing inhibition on undesired actions, in a way affected by DA modulation [92-94].

DA synapses occur primarily on the shafts of the spines of the MSN. Computationally speaking, this is suggestive of a multiplicative rather than an additive process. This can be done by introducing a multiplicative factor in the synaptic strength ws, assuming, as documented, excitatory effects in the direct pathway and inhibitory effects in the indirect pathway. Thus, for the direct pathway, the afferent synaptic strength ws is modified to ws(1 − λe), where λe denotes the degree of tonic DA modulation and obeys 0 ≤ λe ≤ 1. The function in (15) now becomes H[ci − ε/(ws(1 − λe))]. The equilibrium output xei in the ith channel of the indirect pathway is now:

In order to ease notation, we write the up state as Hi ↑ (λe). Similarly in the direct pathway:

Finally, all these sets of equations coupling compartments with all the variables (currents, synapses, modulation) were solved by replacing each differential equation with a difference equation solved at discrete time intervals [95]. This was done with a computer neural simulator system running on a high-demand computing environment. The single-neuron simulations were built in the NEURON simulator [96] and then migrated and incorporated into a network running in the GENESIS simulator [41, 97]. The latter was preferred because it uses implicit methods of numerical integration for accuracy, in addition to its faster numerical capabilities for integration by these methods [72, 78].
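As an illustration of the difference-equation approach, and of why implicit methods are attractive, the passive membrane equation can be discretized with either an explicit or an implicit update. The sketch below uses a generic backward-Euler step with placeholder parameter values; it is not the actual integrator implemented in GENESIS:

```python
# Discretizing Cm*dV/dt = (E - V)/Rm + I over a time step dt
Cm, Rm, E = 1e-11, 1e9, -0.080   # illustrative capacitance (F), resistance (Ohm), rest potential (V)

def forward_euler(V, I, dt):
    """Explicit update: evaluates the right-hand side at the current state; stable only for small dt."""
    return V + dt * ((E - V) / Rm + I) / Cm

def backward_euler(V, I, dt):
    """Implicit update: solves for the new state; unconditionally stable for this linear equation."""
    return (V + dt * (E / Rm + I) / Cm) / (1.0 + dt / (Rm * Cm))

V_exp = V_imp = E
dt, I = 5e-4, 2e-11              # a deliberately coarse 0.5 ms step and a 20 pA input
for _ in range(100):             # 50 ms of simulated time
    V_exp = forward_euler(V_exp, I, dt)
    V_imp = backward_euler(V_imp, I, dt)

print('explicit: %.1f mV, implicit: %.1f mV' % (V_exp * 1e3, V_imp * 1e3))
```

Both updates converge to the same steady state (about -60 mV with these placeholder values); the implicit form keeps that accuracy and stability even when the time step becomes large relative to the membrane time constant.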

4. Results and Discussion

The output of the running simulation was processed in real time for graphical visualization of the network activity. The data were mapped onto a Cartesian plane, representing the positions of the MSN neurons as triangles and squares. An MSN patch, represented by the squares, was then stimulated, and the units were color-scaled as their membrane potentials changed. Some randomly chosen MSN potential plots were added (four in the video shown in the supplementary material, representing the arbitrarily numbered cells 1, 155, 161, and 368; International System units). The simulated cortical stimulus was defined as a 50 "spot flash" applied for 20 milliseconds to only one fifth of the active patch.

To demonstrate the validity of our model, we analyzed our outputs in two phases. In phase 1 we compared the modeled cell units against the most accepted results in biological research [47, 98-100]. Their morphology and activity are based on the circuit shown in Fig. 2; the effects and parameters simulated in the circuit are shown in Table 1. The PLTS cells, not yet entirely characterized as a homogeneous population, have been reported to participate actively in regulating the balance between excitation and inhibition in cortical circuits projecting to the NS (Beierlein et al. [118], Silberberg and Markram [119], and Kapfer et al. [120]), but they evoke only sparse and relatively weak GABAergic IPSCs in the MSN [67]. Therefore, we do not have conclusive results on their direct effect on the GABAergic MSN, although it has been theorized that their main function focuses on the modulation of SOM/NPY/NOS interneurons [68, 69, 70]. Because of that, they are not considered for the purposes of this model.

Figure 2.

Simulated voltage output compared against biological models, taken from known and accepted reports. A, simulated output from the MSN. B, MSN recordings from the experiments performed by Wilson and Kawaguchi in 1996 [50]. C, simulated output from the FS GABAergic interneuron. D, FS recordings from the experiments by Tepper in 2010 [101]. E, simulated output from the TAN cholinergic interneuron. F, results reported by Bennett et al. in 2010 [102]. In all simulations, the ionic environment could be reproduced in the network for the equivalent of 0.5 milliseconds of biological activity. Graphics B, D, and F are not scaled comparatively with A, C, and E.

Table 1.

Cell properties used in simulated NS circuitry.

Cell Type | Input resistance (MΩ) | Time constant (ms) | Effect on MSN | Simulated dopaminergic effect
MSN | 20-60 | 5-15 | -- | Only depolarization considered
FS | 50-150 | 7-9 | Proximal synapses; can delay or completely block spiking on PSP | Depolarization and increase of input resistance
TAN | 71-105 | 17.8-28 | Moderate effect, as a modulator of MSN excitability to sensory activation from the cortex (Misgeld 1986) | None

The main insights from the waveform morphology of the MSN, FS [101], and TAN [102] indicate that the neurons could be visualized in the time/voltage plots, although some conductances needed to be adjusted from the experimental findings to fit the curves obtained from the electrophysiological setups in which real neurons are recorded.

The anomalous rectification classically reported in the MSN needs to be verified as a function of the currents modeled [47]. Classically, up to six different conductances have been described in the MSN [50, 103-105], but data about their respective weights relative to each other and their proper locations in the cell compartments are neither available nor complete [106].

In the state of the art of the experimental procedures carried out with MSNs in vitro and in vivo, methodologies that require isolating or blocking each current have been used [107, 108]. Thus, a complete model would need the simulation of these six mainly characterized conductances, plus the selected network parameters, which together represent a series of variables that is difficult and impractical to analyze as a whole. We chose to simplify the model by representing the activity of each cell with a leaky integrator function; the procedure is explained in the supplementary material. The model used only one projection cell for analysis, leaving the others inactive for further study. There are different procedures available in which some steady values, such as the input resistance, vary, or in which the time constants are mostly altered by the micro-pipettes or by the type of recording device [109].

Most of the experiments have been carried out in different species [110, 28, 54, 85, 86], and though there is some acceptance that they are equivalent, there are still many variables that must be taken into account by a mathematical model that goes from simple to complex structures and can deliver a complete output of all these isolated conductances in real time [111-113].

Considering all this background, and although the simulated neurons behave realistically enough to be compared with real ones, it is still under consideration whether the differences encountered are due to a variable not considered or to a current or form of neural integration not yet discovered. In electrophysiology, reports are still being discussed to determine whether the conductances characterized so far are solid enough by themselves or whether there are still other interactions to be discovered [114]. A fuller picture can be obtained with a model that integrates the knowledge actually available, which would encourage further use and refinement of this model. As with the MSN, in the case of the FS and TAN neurons the results were still accurate enough, but with more differences, derived from the very recent biological data available regarding their function [101, 102, 115].

In phase 2 we analyzed the behavior of the model as a whole network (Fig. 3a). A video reconstruction is available in the supplementary material. The main issue at this stage is testing against the biological specimen, because there are no experiments available to compare against. Nonetheless, we do have information about field responses and postsynaptic responses, which have been useful for indirectly inferring the activity of the NS. Notwithstanding the lack of biological data for comparison, cortical and sub-cortical waves have been analyzed in many computational models as a Bayesian problem [13, 116], using a two-step Karhunen-Loève (KL) decomposition. Briefly, the output was split into a sequence of overlapping 10 ms encoding windows. Within each window, the activity movie was projected as a point, using a double KL decomposition, into a suitable low-dimensional B-space (Fig. 3b). The sequence of points in B-space forms a strand, called a β-strand. Each NS wave was represented as a vector-valued time series given by the β-strand, and the detection task operated by DA was to discriminate strands from different combinations of modulation status, empirically tuned as explained above. That is how the problem was reduced to a Bayesian problem. Expanding detection windows (EDW) and sliding detection windows (SDW) were applied over the β-strand. The combination of encoding and decoding windows made it possible to localize the NS target in space as a function related to double-input time-delay stimuli. This means that this analysis makes it possible to show, in a rather simplified way, activation/no activation of NS network patterns against activation/no activation of DA modulation (Fig. 4), demonstrating that modulation by DA over the three-neuron-type network configured in the framework is possible.
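The windowed decomposition described above can be sketched generically with principal-component (KL) projections. The synthetic data, window sizes, and two-stage reduction in the following sketch are illustrative assumptions, not the actual analysis code or parameters used in this study:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "NS wave": n_frames snapshots of activity over n_cells units
n_cells, n_frames, dt_ms = 100, 400, 0.1
t = np.arange(n_frames) * dt_ms
wave = (np.sin(0.05 * t)[:, None] * rng.standard_normal(n_cells)[None, :]
        + 0.1 * rng.standard_normal((n_frames, n_cells)))

def kl_project(X, k):
    """Project the rows of X onto their first k principal (KL) modes."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# First KL pass: each frame becomes a point in a 3-D A-space
A = kl_project(wave, k=3)

# Encoding windows: overlapping 10 ms segments of the A-space trajectory
win, step = int(10 / dt_ms), int(5 / dt_ms)
segments = np.stack([A[i:i + win].ravel() for i in range(0, n_frames - win, step)])

# Second KL pass: each window becomes a point in a low-dimensional B-space;
# the ordered sequence of these points is the beta-strand described in the text
beta_strand = kl_project(segments, k=2)
print(beta_strand.shape)   # (number of windows, 2)
```

In the actual analysis, strands obtained with and without DA modulation are then compared, and discriminating between them is what reduces the detection task to a Bayesian problem.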

Figure 3.

A, representation of the simulated neural units on a Cartesian plane. The main neuron, the MSN, is segregated into a patch (MSN/P1) and a matrix (MSN/P2) region, representing 97% of the whole population. TAN and FS interneurons, representing 5% of the network, are incorporated and connected using the pattern described in the text. Bottom: phase trajectories in A-space, the product of the double KL decomposition. They represent the responses to three different stimuli on the network under two different conditions: B, without the influence of DA neuromodulation (top); C, with the influence of DA neuromodulation. The first decomposition represents a wave as a linear combination of a series of spatial modes with time-varying coefficients. Thus, the wave is adequately represented (as has already been shown by Senseman and Robbins [10]) by a trajectory in a phase space called A-space. Most of the energy contained in the original wave can be captured by the decomposition coefficients corresponding to the first three principal modes. A further reduction of the dimensionality of the wave is achieved by a second KL decomposition, which maps the trajectory in A-space into a point in a low-dimensional space spanned by temporal modes. The data were processed using windowing techniques, including a sliding encoding window in the wave-encoding process and expanding detection window (EDW) and sliding detection window (SDW) techniques in the information-decoding process, to estimate the position of the stimuli in space (see the supplementary material for visualization).

Figure 4.

Detection error probability (activation rate) as a function of the end of the time window. A, detection of the stimuli by the EDW approach. B, by the SDW approach. C, with DA modulation, by the EDW approach. D, with DA modulation, by the SDW approach. This analysis shows the overall action of DA modulation on the probabilistic activation of the simulated NS network.


The leaky integrator function used here is a classical framework based on simplified neuronal units that are represented merely as circuits, without considering the operating electrical properties of the ionic conductances within the cell [117]. It has been used to build proposals for data processing in neural structures [78, 111], but such mathematical constructs cannot be contrasted against biological models for feedback, because their informational nature and mathematical construction are not the same, especially from the point of view that neural tissue processes information in an analog manner [83, 90, 118, 119]. For this reason, different proposals and alternatives to the leaky integrator function have been presented elsewhere, especially using the fuzzy integrator technique [7, 120, 121].

The modeling work considered above can be applied to demonstrate signal selection by the BG, and the proper response of the cells mathematically simulated and embedded in it, rather than to apply action selection per se. To convincingly show that the basal ganglia model is able to operate as an effective action-selection device, we believe it needs to be embedded in a real-time sensory-motor interaction with the physical world, or else in a construct that simulates one.

Acknowledgment

The author thanks the Posgrado en Ciencias Biológicas of the National Autonomous University of Mexico for the training received during his postgraduate studies. This work was supported by PAPCA-Iztacala UNAM-2014-2015 and PAPIIT-DGAPA UNAM IN215114 grants.

References
[1]
A.E. Bryson.
Optimal control 1950 to 1980.
IEEE Control Systems, 13 (2006), pp. 26-33
[2]
J.C. Rietschel, R.N. Goodman, B.R. King, L. Lo, J.L. Contreras-Vidal, B.D. Hatfield.
Cerebral cortical dynamics and the quality of motor behavior during social evaluative challenge.
Psychophysiology, 48 (2011), pp. 479-487
[3]
Y. Zhang, K. Chen, M. Baron, M.A. Teylan, Y. Kim, Z. Song, P. Greengard, S.T.C. Wong.
A neurocomputational method for fully automated 3d dendritic spine detection and segmentation of medium-sized spiny neurons.
Neuroimage, 50 (2010), pp. 1472-1484
[4]
A.K. Waljee, P.D.R. Higgins.
Machine learning in medicine: a primer for physicians.
Am. J. Gastroenterol., 105 (2010), pp. 1224-1226
[5]
J. Bower, D. Beeman. Special issue on realistic neural modeling, 2005.
[6]
J.L. Contreras-Vidal, S. Grossberg, D. Bullock.
A neural model of cerebellar learning for arm movement control: cortico-spino-cerebellar dynamics.
Learn. Mem., 3 (1997), pp. 475-502
[7]
S. Pérez, M. Garcés, C. Cabiedes, V. Miranda.
Electronic model of a dubois fuzzy integration neuron.
Journal of Applied Research and Technology, 7 (2009), pp. 73-82
[8]
M. Bañuelos-Saucedo, J. Castillo-Hernández, S. Quintana-Thierry, S. Damián-Zamacona, Valeriano-Assem R. Cervantes, R. Fuentes-González, G. Calva-Olmos, J. Pérez-Silva.
Implementation of a neuron model using fpgas.
Journal of Applied Research and Technology, 1 (2003), pp. 248-255
[9]
R.E. Suri, W. Schultz.
A neural network model with dopamine-like reinforcement signal that learns a spatial delayed response task.
Neuroscience, 91 (1999), pp. 871-890
[10]
S. Wen, A. Ulloa, F. Husain, B. Horwitz, J.L. Contreras-Vidal.
Simulated neural dynamics of decision-making in an auditory delayed match-to-sample task.
Biol Cybern, 99 (2008), pp. 15-27
[11]
R.E. Suri, J. Bargas, M.A. Arbib.
Modeling functions of striatal dopamine modulation in learning and planning.
Neuroscience, 103 (2001), pp. 65-85
[12]
J. Cuevas-Tello, R. González-Grimaldo, O. Rodríguez-González, H. Pérez-González, O. Vital-Ochoa.
Parallel approach for time series analysis with general regression neural networks.
Journal of Applied Research and Technology, 10 (2012), pp. 162-179
[13]
I. Abnizova, A.G. Rust, M. Robinson, R. Te Boekhorst, W.R. Gilks.
Transcription binding site prediction using markov models.
J Bioinform Comput Biol, 4 (2006), pp. 425-441
[14]
G.E. Alexander, M.D. Crutcher.
Neural representations of the target (goal) of visually guided arm movements in three motor areas of the monkey.
J. Neurophysiol., 64 (1990), pp. 164-178
[15]
C.H. Bailey, E.R. Kandel.
Synaptic remodeling, synaptic growth and the storage of long-term memory in aplysia.
Prog. Brain Res., 169 (2008), pp. 179-198
[16]
J.C. Houk.
Information processing in modular circuits linking basal ganglia and cerebral cortex.
Models of Information Processing in the Basal Ganglia, pp. 3-9
[17]
J.S. Dittman, W.G. Regehr.
Mechanism and kinetics of heterosynaptic depression at a cerebellar synapse.
J. Neurosci., 17 (1997), pp. 9048-9059
[18]
T. Shultz, C. Hansen, T. Shultz.
Response to use of bootstrap procedure and monte carlo simulation.
J. Nutr., 130 (2000), pp. 2619
[19]
A.G. Barto.
Reinforcement learning control.
Curr. Opin. Neurobiol., 4 (1994), pp. 888-893
[20]
A.G. Barto.
Learning by statistical cooperation of self-interested neuron-like computing elements.
Hum Neurobiol, 4 (1985), pp. 229-256
[21]
G. Ma de G, R. J, R. A, O. E, G.A. J, L. S.
Acceleration of association-rule based markov decision processes.
Journal of Applied Research and Technology, 7 (2009), pp. 354-375
[22]
D. Burfoot, M. Lungarella, Y. Kuniyoshi.
Toward a theory of embodied statistical learning.
Lecture Notes in Computer Science, 10 (2008), pp. 270-279
[23]
D. Joel, Y. Niv, E. Ruppin.
Actor-critic models of the basal ganglia: new anatomical and computational perspectives.
Neural Netw, 15 (2002), pp. 535-547
[24]
M. Grosse-Wentrup, J.L. Contreras-Vidal.
The role of the striatum in adaptation learning: a computational model.
Biol Cybern, 96 (2007), pp. 377-388
[25]
J. Brown, D. Bullock, S. Grossberg.
How the basal ganglia use parallel excitatory and inhibitory learning pathways to selectively respond to unexpected rewarding cues.
J. Neurosci., 19 (1999), pp. 10502-10511
[26]
E. Wiesendanger, S. Clarke, R. Kraftsik, E. Tardif.
Topography of cortico-striatal connections in man: anatomical evidence for parallel organization.
Eur. J. Neurosci., 20 (2004), pp. 1915-1922
[27]
Shultz W., Apicella P., Romo R., Scarnati E..
Context-dependent activity in primate striatum reflecting past and future behavioral events.
Models of Information Processing in the Basal Ganglia, pp. 10-27
[28]
C.C. Wilson.
The basal ganglia.
The synaptic organization of the brain, Oxford University Press, (2004), pp. 361-413
[29]
R.E. Suri.
TD models of reward predictive responses in dopamine neurons.
Neural Networks, 15 (2002),
[30]
R. Suri, C. Albani, A. Hlatfelder.
A dynamic model of motor basal ganglia functions.
Biological Cybernetics, 76 (1997), pp. 451-458
[31]
Sutton R.S., Barto A.G..
Toward a modern theory of adaptive networks: expectation and prediction.
Psychol Rev, 88 (1981), pp. 135-170
[32]
E. Aimeur, C. Frasson.
Analyzing a new learning strategy according to different knowledge levels..
Computers Educ, 27 (1996), pp. 115-127
[33]
Niv Yael.
Reinforcement learning in the brain.
Journal of Mathematical Psychology, 53 (2009), pp. 139-154
[34]
H.E. Atallah, D. Lopez-Paniagua, J.W. Rudy, R.C. O’Reilly.
Separate neural substrates for skill learning and performance in the ventral and dorsal striatum.
Nat. Neurosci., 10 (2007), pp. 126-131
[35]
I. Bar-Gad, H. Bergman.
Stepping out of the box: information processing in the neural networks of the basal ganglia.
Curr. Opin. Neurobiol., 11 (2001), pp. 689-695
[36]
F. Wörgötter, B. Porr.
Temporal sequence learning, prediction, and control: a review of different models and their relation to biological mechanisms.
Neural Comput, 17 (2005), pp. 245-319
[37]
C.A. Thorn, H. Atallah, M. Howe, A.M. Graybiel.
Differential dynamics of activity changes in dorsolateral and dorsomedial striatal loops during learning.
[38]
W. Ridell, F. Gravetter, W. Rogers.
Further investigation of the relationship between brain indices and learning.
Physiology and Behavior, 17 (1976), pp. 231-274
[39]
S. Crook, D. Beeman, P. Gleeson, F. Howell.
Xml for model specification in neuroscience.
Brains, Minds, and Media, 1 (2005), pp. 228
[40]
A.M. Graybiel.
The basal ganglia.
Trends Neurosci, 18 (1995), pp. 60-62
[41]
A. Parent, L.N. Hazrati.
Functional anatomy of the basal ganglia. i. the cortico-basal ganglia-thalamo-cortical loop.
Brain Res. Brain Res. Rev., 20 (1995), pp. 91-127
[42]
A. Parent, L.N. Hazrati.
Functional anatomy of the basal ganglia. ii. the place of subthalamic nucleus and external pallidum in basal ganglia circuitry.
Brain Res. Brain Res. Rev., 20 (1995), pp. 128-154
[43]
A. Parent.
Extrinsic connections of the basal ganglia.
TINS, 13 (1990), pp. 254-258
[44]
E.S. Nisenbaum, C.J. Wilson.
Potassium currents responsible for inward and outward rectification in rat neostriatal spiny projection neurons.
J. Neurosci., 15 (1995), pp. 4449-4463
[45]
E.S. Nisenbaum, C.J. Wilson, R.C. Foehring, D.J. Surmeier.
Isolation and characterization of a persistent potassium current in neostriatal neurons.
J. Neurophysiol., 76 (1996), pp. 1180-1194
[46]
Y. Park, K. Kim.
Short-term plasticity of small synaptic vesicle (ssv) and large dense-core vesicle (ldcv) exocytosis.
Cell. Signal., 21 (2009), pp. 1465-1470
[47]
C.J. Wilson, Y. Kawaguchi.
The origins of two-state spontaneous membrane potential fluctuations of neostriatal spiny neurons.
J. Neurosci., 16 (1996), pp. 2397-2410
[48]
J. Bargas, E. Galarraga, J. Aceves.
An early outward conductance modulates the firing latency and frequency of neostriatal neurons of the rat brain.
Exp Brain Res, 75 (1989), pp. 146-156
[49]
A.E. Pollack.
Anatomy, physiology, and pharmacology of the basal ganglia.
Neurol Clin, 19 (2001), pp. 523-534
[50]
A.A. Utter, M.A. Basso.
The basal ganglia: an overview of circuits and function.
Neurosci Biobehav Rev, 32 (2008), pp. 333-342
[51]
M. DeLong, T. Wichmann.
Update on models of basal ganglia function and dysfunction.
Parkinsonism Relat. Disord, 15 (2009), pp. S237-40
[52]
R.L. Albin, A.B. Young, J.B. Penney.
The functional anatomy of basal ganglia disorders.
Trends Neurosci., 12 (1989), pp. 366-375
[53]
C.F. Orr, D.B. Rowe, G.M. Halliday.
An inflammatory review of parkinson’s disease.
Prog Neurobiol, 68 (2002), pp. 325-340
[54]
H. Takahashi, K. Wakabayashi.
The cellular pathology of parkinson’s disease.
Neuropathology, 21 (2001), pp. 315-322
[55]
E.C. Hirsch.
How to judge animal models of parkinson’s disease in terms of neuroprotection.
J. Neural Transm. Suppl., (2006), pp. 255-260
[56]
J.C. Houk, S.P. Wise.
Distributed modular architectures linking basal ganglia, cerebellum, and cerebral cortex: their role in planning and controlling action.
Cereb. Cortex, 5 (1995), pp. 95-110
[57]
Y. Kawaguchi.
Neostriatal cell subtypes and their functional roles.
Neurosci. Res., 27 (1997), pp. 1-8
[58]
V.M. André, C. Cepeda, D.M. Cummings, E.L. Jocoy, Y.E. Fisher, X. William Yang, M.S. Levine.
Dopamine modulation of excitatory currents in the striatum is dictated by the expression of d1 or d2 receptors and modified by endocannabinoids.
Eur. J. Neurosci., 31 (2010), pp. 14-28
[59]
K. Gurney, T.J. Prescott, P. Redgrave.
A computational model of action selection in the basal ganglia. i. a new functional anatomy.
Biol Cybern, 84 (2001), pp. 401-410
[60]
B.D. Bennett, C.J. Wilson.
Spontaneous activity of neostriatal cholinergic interneurons in vitro.
J. Neurosci., 19 (1999), pp. 5586-5596
[61]
C.J. Wilson.
Passive cable properties of dendritic spines and spiny neurons.
J. Neurosci., 4 (1984), pp. 281-297
[62]
P. Pakhotin, E. Bracci.
Cholinergic interneurons control the excitatory input to the striatum.
J. Neurosci., 27 (2007), pp. 391-400
[63]
C.J. Wilson, J.A. Goldberg.
Origin of the slow afterhyperpolarization and slow rhythmic bursting in striatal cholinergic interneurons.
J. Neurophysiol., 95 (2006), pp. 196-204
[64]
O. Ibáñez-Sandoval, F. Tecuapetla, B. Unal, F. Shah, T. Koós, J.M. Tepper.
Electrophysiological and morphological characteristics and synaptic connectivity of tyrosine hydroxylase-expressing neurons in adult mouse striatum.
J. Neurosci., 30 (2010), pp. 6999-7016
[65]
C.J. Wilson, P.M. Groves.
Fine structure and synaptic connections of the common spiny neuron of the rat neostriatum: a study employing intracellular injection of horseradish peroxidase.
J. Comp. Neurol., 194 (1980), pp. 599-615
[66]
T. Prescott, M. Fernando, G. Kevin, H. Mark, R. Peter.
A robot model of the basal ganglia: behavior and intrinsic processing.
Neural Networks, 19 (2006), pp. 31-61
[67]
Z. Nenadic, B.K. Ghosh.
Encoding and decoding of analog signals with a population of neurons.
Math Comp Mod, 39 (2004), pp. 181-196
[68]
R.C. O’Reilly, M.J. Frank.
Making working memory work: a computational model of learning in the prefrontal cortex and basal ganglia.
Neural Comput, 18 (2006), pp. 283-328
[69]
J. Bower, D. Beeman. Special issue on realistic neural modeling, 2005.
[70]
A. Hodgkin, A. Huxley.
Propagation of electrical signals along giant nerve fibers.
Proc. R. Soc. Lond., B, Biol. Sci., 140 (1952), pp. 177-183
[71]
A. Hodgkin, A. Huxley.
A quantitative description of membrane current and its application to conduction and excitation in nerve.
J. Physiol. (Lond.), 117 (1952), pp. 500-544
[72]
A. Hodgkin, A. Huxley.
The dual effect of membrane potential on sodium conductance in the giant axon of loligo.
J. Physiol. (Lond.), 116 (1952), pp. 497-506
[73]
A. Hodgkin, A. Huxley.
The components of membrane conductance in the giant axon of loligo.
J. Physiol. (Lond.), 116 (1952), pp. 473-496
[74]
A. Hodgkin, A. Huxley.
Currents carried by sodium and potassium ions through the membrane of the giant axon of loligo.
J. Physiol. (Lond.), 116 (1952), pp. 449-472
[75]
J. Edgerton.
Simulating in vivo-like synaptic input patterns in multicompartmental models.
Brains, Minds and Media, 1 (2005), pp. 225
[76]
W. Rall.
Time constant and electrotonic length of membrane cylinders and neurons.
Biophysics, 9 (1969), pp. 1483-1508
[77]
W. Rall..
Cable theory for neurons.
Handbook of physiology: the nervous system, pp. 39-98
[78]
Y. Thuboshita, O. Hiroshi.
Context-dependent retrieval of information by neural-network dynamics with continuous attractors.
Neural Networks, 20 (2007), pp. 705-713
[79]
F. Pongrácz.
The function of dendritic spines: a theoretical study.
Neuroscience, 15 (1985), pp. 933-946
[80]
M. Bota, M.A. Arbib.
Integrating databases and expert systems for the analysis of brain structures: connections, similarities, and homologies.
Neuroinformatics, 2 (2004), pp. 19-58
[81]
A.L. Hodgkin.
A note on conduction velocity.
J. Physiol. (Lond.), 125 (1954), pp. 221-224
[82]
Z. Qi, G.W. Miller, E.O. Voit.
The internal state of medium spiny neurons varies in response to different input signals.
BMC Syst Biol, 4 (2010), pp. 26
[83]
B. Dulam-Banawa, A. Marin-Sanguino, E. Mendoza.
The evolution of synapse models--from numbers to networks to spaces.
Pharmacopsychiatry, 43 (2010), pp. S42-9
[84]
M. Arbib.
The handbook of brain theory and neural networks, pp. 4-11
[85]
W. Yamada, C. Kock, P. Adams.
Multiple channels and calcium dynamics.
Methods on neuronal modeling, pp. 97-134
[86]
K. Gurney, T.J. Prescott, P. Redgrave.
A computational model of action selection in the basal ganglia. ii. analysis and simulation of behaviour.
Biol Cybern, 84 (2001), pp. 411-423
[87]
K.N. Gurney.
Information processing in dendrites ii. information theoretic complexity.
Neural Netw, 14 (2001), pp. 1005-1022
[88]
N. Ray, A.P. Strafella.
Dopamine, reward, and frontostriatal circuitry in impulse control disorders in parkinson’s disease: insights from functional imaging.
Clin EEG Neurosci, 41 (2010), pp. 87-93
[89]
J. Barral, E. Galarraga, D. Tapia, E. Flores-Barrera, A. Reyes, J. Bargas.
Dopaminergic modulation of spiny neurons in the turtle striatum.
Cell. Mol. Neurobiol., (2010),
[90]
H. Zhang, E.W. Rodgers, W.C. Krenz, M.C. Clark, D.J. Baro.
Cell specific dopamine modulation of the transient potassium current in the pyloric network by the canonical d1 receptor signal transduction cascade.
J. Neurophysiol., (2010),
[91]
W. Schultz.
Subjective neuronal coding of reward: temporal value discounting and risk.
Eur. J. Neurosci., (2010),
[92]
G. Hood.
Using p-genesis for parallel simulation of genesis models.
Brains, Minds and Media, 1 (2005), pp. bmm227
[93]
M. Hines, N. Carnevale.
Recent developments in neuron.
Brains, Minds and Media, 1 (2005), pp. bmm221
[94]
R. Goering. Matlab edges closer to electronic design automation world, 2004.
[95]
T.E. Hazy, M.J. Frank, R.C. O’Reilly.
Neural mechanisms of acquired phasic dopamine responses in learning.
Neurosci Biobehav Rev, 34 (2010), pp. 701-720
[96]
C.R. Gerfen.
The neostriatal mosaic: multiple levels of compartmental organization.
Trends Neurosci., 15 (1992), pp. 133-139
[97]
Z. Qi, G.W. Miller, E.O. Voit.
Computational modeling of synaptic neurotransmission as a tool for assessing dopamine hypotheses of schizophrenia.
Pharmacopsychiatry, 43 (2010), pp. S50-60
[98]
J.M. Tepper, F. Tecuapetla, T. Koós, O. Ibáñez-Sandoval.
Heterogeneity and diversity of striatal gabaergic interneurons.
Front Neuroanat, 4 (2010), pp. 150
[99]
B.D. Bennett, J.C. Callaway, C.J. Wilson.
Intrinsic membrane properties underlying spontaneous tonic firing in neostriatal cholinergic interneurons.
Journal of Neuroscience, 15 (2000), pp. 8493-8503
[100]
Y. Kawaguchi, C.J. Wilson, P.C. Emson.
Intracellular recording of identified neostriatal patch and matrix spiny cells in a slice preparation preserving cortical inputs.
J. Neurophysiol., 62 (1989), pp. 1052-1068
[101]
D.J. Surmeier, J. Bargas, S.T. Kitai.
Two types of a-current differing in voltage-dependence are expressed by neurons of the rat neostriatum.
Neurosci. Lett., 103 (1989), pp. 331-337
[102]
J. Bargas, E. Galarraga, J. Aceves.
Electrotonic properties of neostriatal neurons are modulated by extracellular potassium.
Exp Brain Res, 72 (1988), pp. 390-398
[103]
J.M. Tepper, C.J. Wilson, T. Koós.
Feedforward and feedback inhibition in neostriatal gabaergic spiny neurons.
Brain Res Rev, 58 (2008), pp. 272-281
[104]
C. Lettieri, S. Rinaldo, G. Devigili, G. Pauletto, L. Verriello, R. Budai, L. Fadiga, A. Oliynyk, M. Mondani, S. D’Auria, M. Skrap, R. Eleopra.
Deep brain stimulation: subthalamic nucleus electrophysiological activity in awake and anesthetized patients.
Clin Neurophysiol, (2012),
[105]
T. Koos, J.M. Tepper, C.J. Wilson.
Comparison of ipscs evoked by spiny and fast-spiking neurons in the neostriatum.
J. Neurosci., 24 (2004), pp. 7916-7922
[106]
M. Littel, A.S. Euser, M.R. Munafò, I.H.A. Franken.
Electrophysiological indices of biased cognitive processing of substance-related cues: a meta-analysis.
Neurosci Biobehav Rev, (2012),
[107]
G. Alexander, M. Crutcher.
Functional architecture of basal ganglia circuits: neural substrates of parallel processing.
Trends Neurosci., 13 (1990), pp. 266-271
[108]
G.F. Mitchell.
Clinical achievements of impedance analysis.
Med Biol Eng Comput, 47 (2009), pp. 153-163
[109]
D.Z. Jin, N. Fujii, A.M. Graybiel.
Neural representation of time in cortico-basal ganglia circuits.
Proc. Natl. Acad. Sci. U.S.A., 106 (2009), pp. 19156-19161
[110]
D. Jaeger.
Realistic single cell modeling - from experiment to simulation.
Brains, Minds and Media, 1 (2005), pp. bmmm222
[111]
D. Choquet.
Fast ampar trafficking for a high-frequency synaptic transmission.
Eur. J. Neurosci., (2010),
[112]
X. Du, B.K. Ghosh, P.S. Ulinski.
Encoding and decoding target locations with waves in the turtle visual cortex.
IEEE Trans. Biomed. Eng., 52 (2005), pp. 566-577
[113]
T.J. Prescott, F.M. Montes González, K. Gurney, M.D. Humphries, P. Redgrave.
A robot model of the basal ganglia: behavior and intrinsic processing.
Neural Netw, 19 (2006), pp. 31-61
[114]
R.B. Gillespie, J.L. Contreras-Vidal, P.A. Shewokis, M.K. O’Malley, J.D. Brown, H. Agashe, R. Gentili, A. Davis.
Toward improved sensorimotor integration and learning using upper-limb prosthetic devices.
Conf Proc IEEE Eng Med Biol Soc, 2010 (2010), pp. 5077-5080
[115]
C.H. Bailey, M. Giustetto, Y.Y. Huang, R.D. Hawkins, E.R. Kandel.
Is heterosynaptic modulation essential for stabilizing hebbian plasticity and memory?.
Nat. Rev. Neurosci., 1 (2000), pp. 11-20
[116]
J.M. Fellous, C. Linster.
Computational models of neuromodulation.
Neural Comput, 10 (1998), pp. 771-805
[117]
G.A. Carpenter, S. Grossberg, J.H. Reynolds.
A fuzzy artmap nonparametric probability estimator for nonstationary pattern recognition problems.
IEEE Trans Neural Netw, 6 (1995), pp. 1330-1336
[118]
Beierlein M., Gibson J.R., Connors B.W..
Two dynamically distinct inhibitory networks in layer 4 of the neocortex.
J Neurophysiol, 90 (2003), pp. 2987-3000
[119]
Silberberg G., Markram H..
Disynaptic inhibition between neocortical pyramidal cells mediated by Martinotti cells.
[120]
Kapfer C., Glickfeld L.L., Atallah B.V., Scanziani M..
Supralinear increase of recurrent inhibition during sparse activity in the somatosensory cortex.
Nat Neurosci, 10 (2007), pp. 743-753

Supplementary material available online at http://www.academs.mx/jart605

Copyright © 2014. Universidad Nacional Autónoma de México