Results for the SENSE project

1-       A brief introduction

The SENSE project aims to design a novel bio-inspired neuromorphic embedded system that mimics the principles of human vision from the sensor (retina) to high-level computation algorithms, by applying the latest advances in neural coding. Classical sensors are no longer suitable in our model: we need information and signals as they are provided in the mammalian eye. This is the first step towards truly applying the biological model in the pre-processing layer. The project therefore takes the human retina as its starting point, one of the best-understood parts of the human visual system and one of its most studied neural structures. The retina is valuable for understanding how information is sensed and processed by complex neural circuits in the brain. Indeed, the retina is part of the central nervous system and has a synaptic organization similar to that of other central neural structures. Moreover, the retina contains only five major classes of neurons, interconnected in a complex fashion but with an orderly, layered anatomical arrangement, making this organ relatively simple compared with other brain regions.

Based on the recently proposed sparse neural model named ENN, we propose to design a radically different vision system composed of three layers: perception, pre-processing and computation. SENSE aims at a paradigm shift from conventional CMOS/CCD sensors followed by a traditional image-processing step towards a vision system inspired by human vision. This tightly integrated, multi-layered smart vision system will be organized into a stacked architecture that uses complex yet well-understood ENN models at each level, yielding unexpectedly powerful image processing. To be specific, in the human brain visual perception begins in the retina and occurs in two stages. First, light rays entering the eye are converted into electrical signals by specialized sensory cells (retinal photoreceptors, i.e. cones and rods, and then retinal neurons, i.e. bipolar, horizontal, amacrine and ganglion cells). This first step will be realized by designing a smart CMOS sensor embedding an on-chip analog ENN model. The electrical signals are then sent through the optic nerve to higher centers in the brain for the further processing necessary for perception (the lateral geniculate nucleus located in the thalamus, the primary visual cortex, higher visual cortical areas and other cortical areas). This second step will be realized by designing a dedicated digital architecture implementing a hierarchical ENN, in order to further abstract visual information (color hue, color saturation, color brightness, edge detection, ...). Finally, in order to make the SENSE vision system tunable to specific image-processing applications, a software ENN model coupled with original, dedicated image-processing algorithms will be executed on a programmable many-core architecture. In addition, an artificial learning process will be defined to further enhance the flexibility of the proposed vision system.


2-       Results for the Analog step

The architecture of the analog ENN is in fact mixed-signal. A single 30-synapse, 128-fanal cluster has been implemented, and the implemented decision rule is winner-takes-all (WTA). To build a clique-based network, the analog circuit is designed to be controlled by a programmable digital circuit (an FPGA); these fanals define the analog neural network.

The CMOS 65 nm analog encoded neural network was sent to the foundry in late 2015. The 25 encapsulated dies were received in April 2016.

(cf our final report)


The first tests with this chip have been performed (and published) with this test configuration. We perform automatic recognition of electroencephalogram (EEG) signals decomposed into 5x5-pixel patterns. In each pattern, 2 pixels have been erased (results obtained in 406 cycles at 10 MHz). Our first experiments show that we achieve 90% correct reconstruction with our chip.


These first results could be improved with an enhanced ENN model.


3-       Results for the Digital step

A. The original GBNN model has been renamed ENN (Encoded Neural Network) but what are we talking about?

An ENN is an abstract neural network model based on sparse clustered networks that can be used to design associative memories. The principle of the ENN is to gather sets of neurons into clusters. Neurons (also called fanals) that belong to the same cluster cannot be connected to each other (this principle is called sparsity). However, any neuron of a given cluster can be connected to any neuron in any other cluster. More precisely, the network consists of N binary neurons arranged into C equally-partitioned clusters, each cluster containing L = N/C neurons. Each cluster is associated, through one of its neurons, with a portion of an input message to be learned or retrieved. A message m of K bits is thus divided into C sub-messages, one per cluster, and the length of the sub-message associated with each cluster is X = K/C = log2(L) bits.
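The message layout above can be sketched in a few lines of code. This is a minimal illustration, not the project's implementation; the helper name `message_to_fanals` is hypothetical.

```python
# Sketch of the ENN message layout described above: a K-bit message is split
# into C sub-messages, each selecting one fanal in its cluster of L = 2^X
# fanals, with X = K/C = log2(L). Helper name is illustrative only.

def message_to_fanals(message_bits, C):
    """Return, for each cluster, the index of the fanal encoding its sub-message."""
    K = len(message_bits)
    assert K % C == 0, "message length must divide evenly into C clusters"
    X = K // C                 # bits per cluster: X = K/C = log2(L)
    L = 2 ** X                 # fanals per cluster
    fanals = []
    for c in range(C):
        sub = message_bits[c * X:(c + 1) * X]
        fanals.append(int("".join(map(str, sub)), 2))  # sub-message -> fanal index
    return fanals, L

# Example: an 8-bit message over C = 4 clusters (X = 2 bits, L = 4 fanals each).
fanals, L = message_to_fanals([1, 0, 0, 1, 1, 1, 0, 0], C=4)
print(fanals, L)  # [2, 1, 3, 0] 4
```

Each cluster thus activates exactly one of its L fanals, which is what makes the stored patterns sparse.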

During the learning phase, the network memorizes that the set of activated neurons (i.e. the set of neurons that constitute the input message) are connected to each other and form a clique. Unlike Hopfield networks, the ENN uses binary-weighted connections between neurons to record whether a connection exists or not. Then, to memorize a connection between a neuron n(i,j) (neuron i of cluster j) and a neuron n(k,g) (neuron k of cluster g, with j≠g), each neuron stores locally the value '1' in its corresponding synaptic weight w (i.e. w(i,j)(k,g)=1 in cluster j and w(k,g)(i,j)=1 in cluster g). All the weights are initialized to 0, i.e. no message has been learnt before training.
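The learning phase can be sketched as follows. This is a minimal illustration under an assumed data layout (weights stored in a dictionary keyed by fanal/cluster pairs), not the project's hardware implementation.

```python
# Minimal sketch of the ENN learning phase: a message is stored as a clique
# by setting the binary weight between every pair of fanals from distinct
# clusters. Weight storage layout is an assumption for illustration.

def learn(weights, clique):
    """Store a message given as a list of (fanal, cluster) pairs."""
    for a, (i, j) in enumerate(clique):          # fanal i of cluster j
        for (k, g) in clique[a + 1:]:            # fanal k of cluster g
            if j != g:                           # no intra-cluster connections
                weights[((i, j), (k, g))] = 1    # w(i,j)(k,g) = 1
                weights[((k, g), (i, j))] = 1    # w(k,g)(i,j) = 1

weights = {}                                     # all weights implicitly 0
learn(weights, [(1, 0), (0, 1), (0, 2)])         # message (n1,0, n0,1, n0,2)
print(weights[((1, 0), (0, 1))])                 # 1
```

Absent dictionary entries stand for the weights that remain 0, i.e. connections that were never learnt.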

The retrieving process is based on a scoring step and a winner-takes-all (WTA) step to detect which neuron, in a cluster associated with a missing part of the message, is the most "stimulated" one. Equation (1) defines the original scoring function used to compute the score of a neuron n(i,j) at time instant t+1. This score depends on the values of all the other neurons n(k,g) from all the other clusters g in the ENN, computed at time instant t (i.e. in the previous iteration of the network), and on the corresponding synaptic weights (i.e. the values w(k,g)(i,j)) stored during the learning process.

(cf our final report for the equation 1)

The WTA equation (2) allows defining, in each cluster c, the neuron n(i,j) or the group of neurons to activate, i.e. the neuron or group of neurons that achieves the maximum score Smax.

(cf our final report for the equation 2)

The network converges in a few time instants, i.e. iterations. At the end of the process, the clusters that originally had no selected neuron are provided with the selection of a neuron or a group of neurons. The answer of the network is then defined by the set of neurons chosen to represent each cluster.

The next figure presents an ENN based on 3 clusters of 3 neurons. Let us consider that the network has learned three messages represented by cliques: (n1,0, n0,1, n0,2), (n2,0, n1,1, n0,2) and (n2,0, n2,1, n0,2). These cliques are represented using a binary synaptic-weight matrix (also called the "interconnection matrix"), a classical adjacency-matrix representation, as depicted in this figure. Each line of the matrix contains the weights storing the existence, or not, of connections between the neurons of a cluster and the neurons of the other clusters. The diagonal blocks are empty since the neurons of a given cluster cannot be connected to each other, so these connections are not represented in the synaptic-weight matrix. For example, the message (n1,0, n0,1, n0,2) is memorized in the matrix through weights:

(cf our report for the example)

Next, if a partial message (_, n1,1, n0,2) is presented to the network, where _ denotes a missing symbol (i.e. the sub-message associated with cluster 0 is missing), then the network must take a decision. The values of the known neurons are first activated (i.e. the values of the neurons associated with known sub-messages are set to 1) and the values of all the neurons are then broadcast through the network. At the end of the scoring step, neurons n1,0 and n2,0 in cluster c0 have a score of 1 and 2 respectively. Indeed, neuron n2,0 receives two non-null values since it is linked to two active neurons (i.e. neurons n1,1 and n0,2), while neuron n1,0 receives only one non-null value (from neuron n0,2). Hence, at the end of this iteration, neuron n2,0 will be selected as the activated neuron by the winner-takes-all algorithm.
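The worked example above can be replayed in code. This is a minimal sketch, assuming the same dictionary-of-binary-weights layout as in the learning-phase description; the WTA step is reduced to a plain `max` over one missing cluster.

```python
# Replay of the 3-cluster example: learn the three cliques, then retrieve the
# missing sub-message of (_, n1,1, n0,2) via scoring + winner-takes-all.
# Data layout (dict of binary weights) is an assumption for illustration.

def learn(weights, clique):
    for a, (i, j) in enumerate(clique):
        for (k, g) in clique[a + 1:]:
            if j != g:
                weights[((i, j), (k, g))] = 1
                weights[((k, g), (i, j))] = 1

weights = {}
for clique in [[(1, 0), (0, 1), (0, 2)],     # (n1,0, n0,1, n0,2)
               [(2, 0), (1, 1), (0, 2)],     # (n2,0, n1,1, n0,2)
               [(2, 0), (2, 1), (0, 2)]]:    # (n2,0, n2,1, n0,2)
    learn(weights, clique)

# Partial message (_, n1,1, n0,2): cluster 0 is missing.
active = [(1, 1), (0, 2)]
L = 3                                        # fanals per cluster
# Scoring: each candidate fanal of cluster 0 sums the weights from active neurons.
scores = {i: sum(weights.get(((k, g), (i, 0)), 0) for (k, g) in active)
          for i in range(L)}
print(scores)        # {0: 0, 1: 1, 2: 2}
winner = max(scores, key=scores.get)         # winner-takes-all in cluster 0
print(winner)        # 2  -> n2,0, as in the example above
```

This matches the text: n1,0 scores 1, n2,0 scores 2, so the WTA selects n2,0.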


B. What about ENN based architecture for SENSE?

First of all, it appeared that the classical version of the ENN model was not powerful enough to meet our expectations. This was not a surprise, since we had originally planned to explore possible evolutions of the ENN model in order to use it in the SENSE vision system.

However, the work to explore and define these new formal aspects (and the associated hardware architectures) was performed within another PhD thesis (i.e. not funded by the SENSE project). The ENN model offers a storage capacity exceeding that of Hopfield networks when the information to be stored follows a uniform distribution. Methods improving performance for non-uniform distributions, as well as hardware architectures implementing ENN networks, had been proposed previously. However, on the one hand these solutions are very expensive in terms of hardware resources, and on the other hand the proposed architectures can only implement networks of fixed size and are not scalable. The objectives of this thesis are: (1) to design ENN-inspired models outperforming the state of the art, (2) to propose architectures cheaper than existing solutions and (3) to design a generic architecture implementing the proposed models and able to handle various network sizes.

The results of this work are: (1) the concept of clone-based neural networks and its variants has been explored, and the proposed clone-based ENN offers better performance than the state of the art for the same memory cost, even when a non-uniform distribution of the information to be stored is considered; (2) hardware architecture optimizations have been introduced that demonstrate significant cost reductions in terms of resources; (3) a generic, scalable architecture able to handle various network sizes has been proposed.


C. How to use ENN based architecture for vision?

To design embedded visual systems, two complementary axes can be considered. The first is the design of new hardware architectures able to efficiently implement vision algorithms; this task has been explored in collaboration with a post-doctoral researcher. The second is the design of algorithms that are less resource-hungry in terms of computation and storage.

In our work, we propose less complex models for visual processing based on our ENN, together with efficient digital architectures to implement them. The models we are working on are connectionist models: computing models based on networks of small processing elements inspired by biological neural networks. Our work is thus divided into two parts, each targeting a specific computer-vision task.

In the first part we consider nearest-neighbor search. Given a query vector, the goal is to retrieve, in a large set of vectors, the closest ones according to a given metric such as the Euclidean distance. It is a well-known function in computer vision, used for applications such as image retrieval, descriptor matching and non-parametric classification. Depending on the application, a vector can be a visual descriptor or a set of features representing an image. To perform this task, we improve our existing ENN-based associative memory. An associative memory is a storage system in which stored data are accessed using their content, or a noisy version of it, rather than an index. The model uses binary neurons and connections and offers a large storage capacity; these properties allow the design of energy-efficient systems. The original model cannot be used for nearest-neighbor search, so we propose several evolutions of the original model to perform it, and we evaluate the impact of these evolutions on the hardware architectures.
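For reference, the baseline task the ENN-based variants are measured against can be sketched as a brute-force scan. This is a generic illustration of nearest-neighbor search, not the project's associative-memory method.

```python
# Baseline exhaustive nearest-neighbor search: given a query vector, return
# the k closest stored vectors under the Euclidean metric. The ENN-based
# approach described in the text replaces this linear scan with an
# associative-memory lookup.

def nearest_neighbors(query, database, k=1):
    """Return the k database vectors closest to `query` (Euclidean metric)."""
    def dist2(v):
        # Squared distance is enough for ranking (monotone in the distance).
        return sum((a - b) ** 2 for a, b in zip(query, v))
    return sorted(database, key=dist2)[:k]

db = [(0.0, 0.0), (1.0, 1.0), (0.9, 1.1), (5.0, 5.0)]
print(nearest_neighbors((1.0, 1.0), db, k=2))  # [(1.0, 1.0), (0.9, 1.1)]
```

The cost of this scan is linear in the database size, which is exactly what makes content-addressable alternatives attractive for large vector sets.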

In the second part, we consider the problem of image classification, which consists in associating an index to an image according to its content. In terms of model, we use deep convolutional neural networks (DNNs), which currently achieve the highest classification accuracy. While DNNs give the best classification accuracy, they are resource-hungry (memory and computation) and cannot easily be implemented on embedded systems. Several works address this problem by compressing the network after training it, for example by applying vector quantization after the training phase. The compression process reduces the resources required by the network, but it also degrades performance. To reduce the complexity of such a network while maintaining high classification accuracy, we propose to tightly couple the compression step with the training phase, in a DNN based on our enhanced ENN model.
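The post-training compression mentioned above can be illustrated with a tiny scalar quantizer: each weight is replaced by the nearest of k centroids learnt by k-means. This is a generic sketch of the idea, not the project's compression scheme, and `kmeans_quantize` is a hypothetical helper.

```python
# Illustrative sketch of post-training weight quantization: replace each
# weight by the nearest of k centroids fitted with a tiny 1-D k-means.
# After quantization only k distinct values (plus an index per weight)
# need to be stored, which is the source of the memory savings.

def kmeans_quantize(weights, k=2, iters=20):
    """Return `weights` with each value snapped to one of k learned centroids."""
    # Crude initialization: spread initial centroids over the sorted weights.
    centroids = sorted(weights)[::max(1, len(weights) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in centroids]
        for w in weights:                        # assignment step
            j = min(range(len(centroids)), key=lambda j: abs(w - centroids[j]))
            groups[j].append(w)
        centroids = [sum(g) / len(g) if g else centroids[j]   # update step
                     for j, g in enumerate(groups)]
    return [min(centroids, key=lambda c: abs(w - c)) for w in weights]

print(kmeans_quantize([0.1, 0.11, 0.9, 0.95], k=2))
```

Coupling this step with training, as the text proposes, lets the network learn weights that are already easy to quantize instead of degrading a fully trained network afterwards.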


4-       Results for the Software step

Initially, the ENN model was validated on randomly generated messages following a uniform distribution (RGU). But since we intend to work with images, the distribution becomes non-uniform, which raises a new challenge called ambiguity.

Indeed, learning non-uniform data such as images, or randomly generated data following a Gaussian distribution (RGNU), with an ENN implies that a few neurons will carry most of the information in the memory. The retrieval phase is then unable to properly decide which neurons should be activated to represent the correct message: this is an ambiguity. The latest publications in the literature focus on reducing ambiguities on RGNU data. The contributions of IRISA also address how to reduce ambiguities and how to let the ENN generalize its information. We list the contributions below by the problem they address.



A. Ambiguities

A first step was to dissociate learnt messages by expanding the network into layers. By generalizing this expansion, the model obtains better results than the original ENN on both RGU and RGNU data.

Another contribution was to use the structural form of a message as a constraint in the retrieval phase. Indeed, since the output must be a clique, by definition all its neurons are fully connected, and hence their number of connections, i.e. their degree, is the same. We therefore define a penalization term so that neurons are activated if and only if they are connected to the others with exactly the degree required to form a clique. We used this penalization on images with compelling results, and it can even eradicate the ambiguities in specific cases. This work led to a publication in an international conference.
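The degree constraint described above is easy to state in code: in a valid clique over C clusters, every activated fanal must be connected to exactly one fanal in each of the other C-1 clusters. The sketch below checks that condition; it is an illustration of the constraint, not the penalization term itself, and the weight layout is an assumption.

```python
# Structural constraint behind the penalization: every active fanal in a
# valid clique over C clusters must have degree C-1 within the active set.
# Weights are assumed stored as a dict of binary entries keyed by
# ((fanal, cluster), (fanal, cluster)) pairs.

def is_clique(weights, active, C):
    """True iff every active fanal has degree C-1 within the active set."""
    for (i, j) in active:
        degree = sum(weights.get(((i, j), (k, g)), 0)
                     for (k, g) in active if (k, g) != (i, j))
        if degree != C - 1:
            return False
    return True

# Toy weights: one full clique over 3 clusters.
w = {}
clique = [(1, 0), (0, 1), (0, 2)]
for a, u in enumerate(clique):
    for v in clique[a + 1:]:
        w[(u, v)] = w[(v, u)] = 1

print(is_clique(w, clique, C=3))                    # True: all degrees are 2
print(is_clique(w, [(0, 0), (0, 1), (0, 2)], C=3))  # False: n0,0 has degree 0
```

A penalization built on this test suppresses candidate neurons whose degree falls short of C-1, which is how it removes ambiguous activations.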


B. What about generalization?

Beyond reducing ambiguities in the learnt messages, another axis is to let the network build a more abstract level of the learnt messages in order to help the ENN generalize its information. To this end, we proposed a bidirectional clique-based associative memory (BCAM), composed of two layers connected by a binary bidirectional matrix. The first layer is composed of sub-cliques associated with the patterns to be learnt, while the second layer holds the connections between them, providing a more abstract level of information; this abstraction is what allows the network to generalize.

We also compared this approach with an auto-encoder, a neural network composed of three layers: an input layer, an output layer, and a hidden layer that learns a compressed representation of the input. During the learning process, the hidden layer extracts features of the learnt messages; this representation plays a role similar to the abstraction level of the BCAM and can likewise be used for generalization.


C. Classification

A final contribution is to build a classifier on top of the associative memory. During the learning phase, each message is learnt together with its label. During retrieval, presenting a message, or part of it, lets the network retrieve the associated label, thus performing classification.


5-       First tests for the SENSE Vision System

First tests have been performed on the integration of the different stages of the SENSE vision system. These experiments validated the behavior of the complete chain on simple proof-of-concept applications.

Beyond these first tests, the capacity of the proposed sensor and of the different feature-extraction stages still has to be evaluated, and possible evolutions have to be explored, in order to define a complete configuration in which the different layers can be composed into hierarchical hardware ENN architectures.
