



        Although tree search and rule-based approaches are practical and understandable for many applications, there are other complex problems in digital health and precision medicine that cannot be efficiently solved by these methods, for example, locating a cataract in an optical image (Mohammadpour et al., 2015) or finding patterns of genes in microarray data. The most effective AI method for coping with these complex problems is a neural network (de Castro et al., 2018). A neural network is a set of neurons connected in the form of a network. Each neuron is a mathematical function imitating the activity of a biological neuron, based on the model proposed by McCulloch and Pitts (McCulloch and Pitts, 1990).

        The original McCulloch and Pitts (MP) model tried to answer two fundamental questions: when does a neuron fire a signal to other connected neurons, and how does a neuron learn? Their proposed model is similar to threshold logic (Senanarong et al., 2018).
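
        As a concrete illustration of this threshold logic, the following minimal Python sketch implements an MP-style neuron; the unit weights and the threshold of 2 are illustrative assumptions, not values prescribed by the original model.

    def mp_neuron(inputs, weights, threshold):
        # Weighted sum of binary inputs; the neuron "fires" (outputs 1)
        # only when the sum reaches the threshold, as in threshold logic.
        activation = sum(x * w for x, w in zip(inputs, weights))
        return 1 if activation >= threshold else 0

    # With unit weights and threshold 2, the neuron computes logical AND:
    print(mp_neuron([1, 1], [1, 1], 2))  # 1: both inputs active, neuron fires
    print(mp_neuron([1, 0], [1, 1], 2))  # 0: sum below threshold, neuron silent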

        A neuron acts as a classifier by applying either a linear or a non-linear function, which makes it versatile enough for a wide range of complex problems.
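
        A minimal sketch of this idea, assuming an identity function for the linear case and a sigmoid for the non-linear case (both common textbook choices, not functions named in the text):

    import math

    def neuron(x, w, b, activation):
        z = sum(xi * wi for xi, wi in zip(x, w)) + b  # weighted sum plus bias
        return activation(z)

    identity = lambda z: z                        # linear output
    sigmoid = lambda z: 1 / (1 + math.exp(-z))    # non-linear, squashes into (0, 1)

    x, w, b = [0.5, -1.2], [2.0, 1.0], 0.1
    print(neuron(x, w, b, identity))  # raw score: -0.1
    print(neuron(x, w, b, sigmoid))   # ~0.475, usable as a class-membership score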

        Many neural network models have been proposed during the past 40 years. Each later model improves on its predecessors in terms of learning speed, classification accuracy, and number of neurons. However, most current neural network models work under the assumption that the entire training data set must be stored in computer memory during the learning process. If new data arrive to be learned, they must be combined with the previous training data and the learning process must be restarted from the beginning. This requirement increases the time complexity of learning. Furthermore, some models have no plasticity to adjust the structure of the neural network by adding neurons or eliminating redundant neurons during the learning process. Such models cannot be used in a big-data environment, where the data keep increasing and eventually overflow the computer memory.
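
        The retraining burden described above can be sketched as follows; scikit-learn's MLPClassifier and the random data are assumptions used purely for illustration, not a model discussed in the text:

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X_old, y_old = rng.random((1000, 8)), rng.integers(0, 2, 1000)
    X_new, y_new = rng.random((200, 8)), rng.integers(0, 2, 200)

    # Old data cannot be discarded: the new chunk must be merged with the
    # full history and training restarted over everything.
    X_all = np.vstack([X_old, X_new])
    y_all = np.concatenate([y_old, y_new])

    model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300)
    model.fit(X_all, y_all)  # learning starts over on the combined data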

        Recently, new neural models have been proposed to cope with tremendous and streaming data volumes (Junsawang et al., 2019). These models deploy the concept of discard-after-learning to learn chunks of streaming data, solving the problem of memory overflow and adding plasticity to the network. Each incoming chunk of data is completely discarded after being learned.
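
        The discard-after-learning idea can be sketched with any incrementally trainable model. The sketch below is not the model of Junsawang et al. (2019); it merely imitates the chunk-at-a-time workflow using scikit-learn's SGDClassifier, whose partial_fit method updates the model from one chunk without revisiting earlier data:

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(0)
    model = SGDClassifier(loss="log_loss")
    classes = np.array([0, 1])  # all class labels must be declared up front

    for _ in range(10):         # ten chunks arriving from a stream
        X_chunk = rng.random((100, 8))
        y_chunk = rng.integers(0, 2, 100)
        model.partial_fit(X_chunk, y_chunk, classes=classes)
        del X_chunk, y_chunk    # the chunk is discarded after being learned

        Because only the current chunk is ever held in memory, the memory footprint stays constant no matter how long the stream runs.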



