Monday, July 19, 2010
Backpropagation Artificial Neural Networks

The development of single-layer artificial neural networks (ANNs) stalled around the 1970s. This was because a single-layer ANN (one having only an input layer and an output layer) is weak at pattern recognition. This weakness can be overcome by adding one or more hidden layers between the input layer and the output layer.
Standard Backpropagation
This change in architecture means that the earlier training algorithms for single-layer networks can no longer be used; a new algorithm is needed to train ANNs with several layers.
The discovery of the backpropagation algorithm contributed to the growth of ANNs, and the many applications it made tractable drew even more interest to the field. As with other ANN training algorithms, backpropagation trains a network so that it can both recall the training patterns (memorization) and respond correctly to patterns it has never seen before (generalization).
Backpropagation Architecture
The standard architecture used in backpropagation is the Multi Layer Perceptron (MLP), a Feed Forward Neural Network (FFNN). This ANN has p neuron units (plus a bias) in one or more hidden layers, n inputs (plus a bias), and m output units.
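As an illustration, here is a minimal forward-pass sketch of such a network in Python with NumPy. The layer sizes, the random weights, and the sigmoid activation are assumptions chosen for the example, not details from the post:

import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

n, p, m = 3, 4, 2                      # inputs, hidden units, outputs (assumed sizes)
rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(p, n))     # input -> hidden weights
b_hidden = np.zeros(p)                 # hidden-layer bias
W_out = rng.normal(size=(m, p))        # hidden -> output weights
b_out = np.zeros(m)                    # output-layer bias

def forward(x):
    h = sigmoid(W_hidden @ x + b_hidden)   # hidden activations
    return sigmoid(W_out @ h + b_out)      # m network outputs

print(forward(np.array([0.5, -1.0, 2.0])))

Backpropagation itself would then adjust W_hidden and W_out from the output error; the forward pass above shows only the architecture.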
Neural networks: A requirement for intelligent systems
Over the years, advances in computing have spurred the growth of new technologies. Such is the case of artificial neural networks, which have provided industry with a variety of solutions.
Designing and implementing intelligent systems has become a crucial factor in the innovation and development of better products for society. Such is the case with the implementation of artificial life, and with the solution of problems that linear systems cannot resolve.
A neural network is a parallel system, capable of resolving paradigms that linear computing cannot. One particular case is worth citing. During the summer of 2006, the government required an intelligent crop protection system. This system would protect crop fields from seasonal plagues. It consisted of a flying vehicle that would inspect crop fields by flying over them.
Now, imagine how difficult this was. Anyone who understood such a task would say the project was destined for a multimillion-dollar enterprise capable of developing such technology. Nevertheless, it wasn't like that. The selected company was a small group of recently graduated engineers. Despite their lack of experience, the team was qualified. It was divided into four sections, each assigned to develop a specific sub-system. The leader was an electronics specialist; she developed the electronic system. Another member was a mechanics and hydraulics specialist; he developed the drive system. The third member was a systems engineer who developed all the software and the communication system. The last member was assigned to develop everything related to avionics and artificial intelligence.
Everything was going fine. When the time came to put the pieces together, everything fit perfectly, until they found out the robot had no knowledge of its task. What happened? The one assigned to develop the artificial intelligence forgot to "teach the system". The solution would have been easy; however, training a neural network required additional tools. The engineer assigned to develop the intelligent system had overlooked this inconvenience.
It was an outsider who suggested the best solution: acquiring neural network software. For an affordable price, the team bought the software, and with its help they designed and trained the system without a problem.
The story ended satisfactorily, but only in some parts of the design. The drive system was working perfectly, as were the software and the communication device. The intelligent system was doing its job. Nonetheless, the project was a complete failure. Why? They never taught it how to fly.
Designing a neural network efficiently
From experience, I know it is not necessary to be a programmer, or to have deep knowledge of complex neural network algorithms, in order to design a neural network. There is a wide range of neural network software out there, and most of it is of good quality. My suggestion for those looking for answers on neural network design is to acquire all the required tools. Good software will save you thousands of hours of programming, as well as the effort of learning complex algorithms. I have added a review section where you can find out which software is most convenient for you. However, you will always find all the information here to develop your own.
Neural network introduction
* Introduction
* Biological Model
* Mathematical Model
o Activation Functions
* A framework for distributed representation
* Neural Network Topologies
* Training of artificial neural networks
This site is intended to be a guide to neural network technologies, which I believe are an essential basis for what awaits us in the future. The site is divided into three sections. The first contains technical information about the known neural network architectures; this section is purely theoretical. The second is a set of topics related to neural networks, such as artificial intelligence, genetic algorithms, and DSPs, among others.
The third section is the site blog, where I present personal projects related to neural networks and artificial intelligence, and where certain theoretical dilemmas can be understood with the aid of source code programs. The site is constantly updated, and the new topics added are all related to artificial intelligence technologies.
Introduction
What is an artificial neural network?
An artificial neural network is a system based on the operation of biological neural networks; in other words, it is an emulation of a biological neural system. Why would the implementation of artificial neural networks be necessary? Although computing these days is truly advanced, there are certain tasks that a program written for a common microprocessor is unable to perform; even so, a software implementation of a neural network can be made, with its own advantages and disadvantages.
Advantages:
* A neural network can perform tasks that a linear program cannot.
* When an element of the neural network fails, the network can continue without any problem thanks to its parallel nature.
* A neural network learns and does not need to be reprogrammed.
* It can be implemented in any application.
* It can be implemented without any problem.
Disadvantages:
* The neural network needs training to operate.
* The architecture of a neural network is different from the architecture of microprocessors, and therefore it needs to be emulated.
* Large neural networks require a great deal of processing time.
Another aspect of artificial neural networks is that there are different architectures, which consequently require different types of algorithms; but despite appearing to be a complex system, a neural network is relatively simple.
Artificial neural networks (ANN) are among the newest signal-processing technologies in the engineer's toolbox. The field is highly interdisciplinary, but our approach will restrict the view to the engineering perspective. In engineering, neural networks serve two important functions: as pattern classifiers and as nonlinear adaptive filters. We will provide a brief overview of the theory, learning rules, and applications of the most important neural network models.
Definitions and Style of Computation
An Artificial Neural Network is an adaptive, most often nonlinear system that learns to perform a function (an input/output map) from data. Adaptive means that the system parameters are changed during operation, normally called the training phase. After the training phase the Artificial Neural Network parameters are fixed and the system is deployed to solve the problem at hand (the testing phase). The Artificial Neural Network is built with a systematic step-by-step procedure to optimize a performance criterion or to follow some implicit internal constraint, which is commonly referred to as the learning rule. The input/output training data are fundamental in neural network technology, because they convey the necessary information to "discover" the optimal operating point. The nonlinear nature of the neural network processing elements (PEs) provides the system with great flexibility to achieve practically any desired input/output map, i.e., some Artificial Neural Networks are universal mappers. There is a style in neural computation that is worth describing.
An input is presented to the neural network and a corresponding desired or target response is set at the output (when this is the case the training is called supervised). An error is composed from the difference between the desired response and the system output. This error information is fed back to the system, which adjusts the system parameters in a systematic fashion (the learning rule). The process is repeated until the performance is acceptable.
It is clear from this description that the performance hinges heavily on the data. If one does not have data that cover a significant portion of the operating conditions, or if the data are noisy, then neural network technology is probably not the right solution. On the other hand, if there is plenty of data but the problem is too poorly understood to derive an approximate model, then neural network technology is a good choice.
This operating procedure should be contrasted with traditional engineering design, made of exhaustive subsystem specifications and intercommunication protocols. In artificial neural networks, the designer chooses the network topology, the performance function, the learning rule, and the criterion to stop the training phase, but the system adjusts the parameters automatically. So, it is difficult to bring a priori information into the design, and when the system does not work properly it is also hard to refine the solution incrementally. But ANN-based solutions are extremely efficient in terms of development time and resources, and in many difficult problems artificial neural networks provide performance that is difficult to match with other technologies. Denker said 10 years ago that "artificial neural networks are the second best way to implement a solution," motivated by the simplicity of their design and by their universality, shadowed only by the traditional design obtained by studying the physics of the problem. At present, artificial neural networks are emerging as the technology of choice for many applications, such as pattern recognition, prediction, system identification, and control.
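As a rough illustration of this style of computation, the sketch below trains a single linear processing element in exactly this loop: present an input, form the error from the desired response, and feed it back through a learning rule until the performance is acceptable. The linear unit, the synthetic data, and the specific update rule are illustrative assumptions, not part of the text above:

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))          # training inputs
d = 2.0 * X[:, 0] - 1.0 * X[:, 1]      # desired (target) responses: a known map

w = np.zeros(2)                        # adaptive system parameters
lr = 0.05                              # learning-rate constant

for epoch in range(50):                # training phase
    for x, target in zip(X, d):
        y = w @ x                      # system output
        error = target - y             # desired response minus output
        w += lr * error * x            # learning rule adjusts the parameters

print(w)  # afterwards the parameters are fixed and deployed (testing phase)

After enough passes, w approaches the coefficients of the underlying map, which is the "optimal operating point" that the training data convey.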
The Biological Model
Artificial neural networks emerged after the introduction of simplified neurons by McCulloch and Pitts in 1943 (McCulloch & Pitts, 1943). These neurons were presented as models of biological neurons and as conceptual components for circuits that could perform computational tasks. The basic model of the neuron is founded upon the functionality of a biological neuron. "Neurons are the basic signaling units of the nervous system" and "each neuron is a discrete cell whose several processes arise from its cell body".
[Figure: a biological neural network]
The neuron has four main regions to its structure. The cell body, or soma, has two offshoots from it: the dendrites, and the axon, which ends in presynaptic terminals. The cell body is the heart of the cell, containing the nucleus and maintaining protein synthesis. A neuron may have many dendrites, which branch out in a treelike structure and receive signals from other neurons. A neuron usually has only one axon, which grows out from a part of the cell body called the axon hillock. The axon conducts electric signals generated at the axon hillock down its length. These electric signals are called action potentials.
The other end of the axon may split into several branches, which end in presynaptic terminals. Action potentials are the electric signals that neurons use to convey information to the brain. All these signals are identical. Therefore, the brain determines what type of information is being received based on the path that the signal took. The brain analyzes the patterns of signals being sent, and from that information it can interpret the type of information being received.
Myelin is the fatty tissue that surrounds and insulates the axon. Often short axons do not need this insulation. There are uninsulated parts of the axon, called the Nodes of Ranvier. At these nodes, the signal traveling down the axon is regenerated. This ensures that the signal travels fast and remains constant (i.e., very short propagation delay and no weakening of the signal).
The synapse is the area of contact between two neurons. The neurons do not actually physically touch; they are separated by the synaptic cleft, and signals are sent through chemical interaction. The neuron sending the signal is called the presynaptic cell and the neuron receiving the signal is called the postsynaptic cell. The signals are generated by the membrane potential, which is based on the differences in concentration of sodium and potassium ions inside and outside the cell membrane.
Neurons can be classified by their number of processes (or appendages), or by their function. If classified by the number of processes, they fall into three categories. Unipolar neurons have a single process (dendrites and axon are located on the same stem) and are most common in invertebrates. In bipolar neurons, the dendrite and axon are the neuron's two separate processes. Bipolar neurons have a subclass called pseudo-bipolar neurons, which are used to send sensory information to the spinal cord. Finally, multipolar neurons are most common in mammals. Examples of these neurons are spinal motor neurons, pyramidal cells, and Purkinje cells (in the cerebellum).
If classified by function, neurons again fall into three separate categories. The first group is sensory, or afferent, neurons, which provide information for perception and motor coordination. The second group provides information (or instructions) to muscles and glands and is therefore called motor neurons. The last group, interneurons, contains all other neurons and has two subclasses. One group, called relay or projection interneurons, have long axons and connect different parts of the brain. The other group, called local interneurons, are used only in local circuits.
The Mathematical Model
When creating a functional model of the biological neuron, there are three basic components of importance. First, the synapses of the neuron are modeled as weights. The strength of the connection between an input and a neuron is noted by the value of the weight. Negative weight values reflect inhibitory connections, while positive values designate excitatory connections [Haykin]. The next two components model the actual activity within the neuron cell. An adder sums up all the inputs modified by their respective weights. This activity is referred to as linear combination. Finally, an activation function controls the amplitude of the output of the neuron. An acceptable range of output is usually between 0 and 1, or -1 and 1.
Mathematically, this process can be described as follows. From this model, the internal activity of the neuron vk is the weighted sum of the inputs:

vk = wk1·x1 + wk2·x2 + ... + wkp·xp

The output of the neuron, yk, would therefore be the outcome of some activation function applied to the value of vk.
Activation functions
As mentioned previously, the activation function acts as a squashing function, such that the output of a neuron in a neural network lies between certain values (usually 0 and 1, or -1 and 1). In general, there are three types of activation functions, denoted by Φ(·). First, there is the Threshold Function, which takes on a value of 0 if the summed input is less than a certain threshold value v, and a value of 1 if the summed input is greater than or equal to the threshold value.
Secondly, there is the Piecewise-Linear Function. This function can also take on the values 0 or 1, but it can take on values in between as well, depending on the amplification factor in a certain region of linear operation.
Thirdly, there is the sigmoid function. This function can range between 0 and 1, but it is also sometimes useful to use the -1 to 1 range. An example of the sigmoid function is the hyperbolic tangent function.
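The three function types above can be sketched in a few lines of Python; the particular threshold, slope, and steepness used below are illustrative assumptions:

import math

def threshold(v):
    # 0 below the threshold (here 0), 1 at or above it
    return 1.0 if v >= 0.0 else 0.0

def piecewise_linear(v, gain=1.0):
    # linear with slope 'gain' in the middle region, clipped to [0, 1]
    return min(1.0, max(0.0, gain * v + 0.5))

def sigmoid(v):
    # logistic sigmoid, range (0, 1); math.tanh(v) gives the (-1, 1) variant
    return 1.0 / (1.0 + math.exp(-v))

for v in (-2.0, 0.0, 2.0):
    print(v, threshold(v), piecewise_linear(v), sigmoid(v))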
The artificial neural networks which we describe are all variations on the parallel distributed processing (PDP) idea. The architecture of each neural network is based on very similar building blocks which perform the processing. In this chapter we first discuss these processing units and the different neural network topologies, and then turn to learning strategies as a basis for an adaptive system.
A framework for distributed representation
An artificial neural network consists of a pool of simple processing units which communicate by sending signals to each other over a large number of weighted connections. A number of major aspects of a parallel distributed model can be distinguished:
* a set of processing units ('neurons,' 'cells');
* a state of activation yk for every unit, which is equivalent to the output of the unit;
* connections between the units. Generally each connection is defined by a weight wjk which determines the effect which the signal of unit j has on unit k;
* a propagation rule, which determines the effective input sk of a unit from its external inputs;
* an activation function Fk, which determines the new level of activation based on the effective input sk(t) and the current activation yk(t) (i.e., the update; see the sketch after this list);
* an external input (also known as bias, or offset) θk for each unit;
* a method for information gathering (the learning rule);
* an environment within which the system must operate, providing input signals and, if necessary, error signals.
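Assuming the usual weighted-sum propagation rule, a single unit update in this framework can be sketched as follows; the choice of tanh for Fk is likewise an assumption made for illustration:

import math

def update_unit(y, w_k, theta_k):
    # y: activations of the sending units j; w_k: weights wjk into unit k
    s_k = sum(w * y_j for w, y_j in zip(w_k, y)) + theta_k  # propagation rule: effective input sk
    return math.tanh(s_k)                                   # activation function: yk = Fk(sk)

print(update_unit(y=[0.2, -0.5, 1.0], w_k=[0.4, 0.1, -0.3], theta_k=0.05))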
Processing units
Each unit performs a relatively simple job: receive input from neighbours or external sources and use this to compute an output signal which is propagated to other units. Apart from this processing, a second task is the adjustment of the weights. The system is inherently parallel in the sense that many units can carry out their computations at the same time. Within neural systems it is useful to distinguish three types of units: input units (indicated by an index i) which receive data from outside the neural network, output units (indicated by an index o) which send data out of the neural network, and hidden units (indicated by an index h) whose input and output signals remain within the neural network. During operation, units can be updated either synchronously or asynchronously. With synchronous updating, all units update their activation simultaneously; with asynchronous updating, each unit has a (usually fixed) probability of updating its activation at a time t, and usually only one unit will be able to do this at a time. In some cases the latter model has some advantages.
Neural Network topologies
In the previous section we discussed the properties of the basic processing unit in an artificial neural network. This section focuses on the pattern of connections between the units and the propagation of data. As for this pattern of connections, the main distinction we can make is between:
* Feed-forward neural networks, where the data flow from input to output units is strictly feedforward. The data processing can extend over multiple (layers of) units, but no feedback connections are present; that is, there are no connections extending from outputs of units to inputs of units in the same layer or previous layers.
* Recurrent neural networks, which do contain feedback connections. In contrast to feed-forward networks, the dynamical properties of the network are important here. In some cases, the activation values of the units undergo a relaxation process such that the neural network evolves to a stable state in which these activations no longer change. In other applications, the changes of the activation values of the output neurons are significant, such that the dynamical behaviour constitutes the output of the neural network (Pearlmutter, 1990).
Classical examples of feed-forward neural networks are the Perceptron and the Adaline. Examples of recurrent networks have been presented by Anderson (Anderson, 1977), Kohonen (Kohonen, 1977), and Hopfield (Hopfield, 1982).
Training of artificial neural networks
A neural network has to be configured such that the application of a set of inputs produces (either 'directly' or via a relaxation process) the desired set of outputs. Various methods exist to set the strengths of the connections. One way is to set the weights explicitly, using a priori knowledge. Another way is to 'train' the neural network by feeding it teaching patterns and letting it change its weights according to some learning rule.
We can categorise the learning situations into two distinct sorts, plus an intermediate form. These are:
* Supervised learning or Associative learning, in which the network is trained by providing it with input and matching output patterns. These input-output pairs can be provided by an external teacher, or by the system which contains the neural network (self-supervised).
* Unsupervised learning or Self-organisation, in which an (output) unit is trained to respond to clusters of patterns within the input. In this paradigm the system is supposed to discover statistically salient features of the input population. Unlike in the supervised learning paradigm, there is no a priori set of categories into which the patterns are to be classified; rather, the system must develop its own representation of the input stimuli.
* Reinforcement learning, which may be considered an intermediate form of the above two types. Here the learning machine performs some action on the environment and gets a feedback response from it. The learning system grades its action as good (rewarding) or bad (punishable) based on the environmental response, and adjusts its parameters accordingly. Generally, parameter adjustment continues until an equilibrium state is reached, after which there are no further changes in the parameters. Self-organizing neural learning may be categorized under this type of learning.
Modifying patterns of connectivity of Neural Networks
Both learning paradigms, supervised and unsupervised learning, result in an adjustment of the weights of the connections between units, according to some modification rule. Virtually all learning rules for models of this type can be considered variants of the Hebbian learning rule suggested by Hebb in his classic book Organization of Behaviour (Hebb, 1949). The basic idea is that if two units j and k are active simultaneously, their interconnection must be strengthened. If j receives input from k, the simplest version of Hebbian learning prescribes modifying the weight wjk by:

Δwjk = γ · yj · yk
where γ is a positive constant of proportionality representing the learning rate. Another common rule uses not the actual activation of unit k but the difference between the actual and desired activation for adjusting the weights:

Δwjk = γ · yj · (dk - yk)

in which dk is the desired activation provided by a teacher. This is often called the Widrow-Hoff rule or the delta rule, and it will be discussed in the next chapter. Many variants (often very exotic ones) have been published over the last few years.
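Both rules amount to a one-line weight update. A minimal sketch in Python, using the notation above (gamma for the learning rate γ); the numeric values are illustrative only:

def hebb_update(w_jk, y_j, y_k, gamma=0.1):
    # strengthen the connection when units j and k are active simultaneously
    return w_jk + gamma * y_j * y_k

def delta_update(w_jk, y_j, y_k, d_k, gamma=0.1):
    # adjust by the difference between desired (dk) and actual (yk) activation
    return w_jk + gamma * y_j * (d_k - y_k)

w = 0.0
w = hebb_update(w, y_j=1.0, y_k=0.8)            # Hebbian step
w = delta_update(w, y_j=1.0, y_k=0.8, d_k=1.0)  # Widrow-Hoff (delta) step
print(w)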
Friday, July 16, 2010
Mentalist Deddy Corbuzier's NC1253HZ6 Video Is Highly Suspicious

Deddy then posted a code, NC1253HZ6, through his Twitter account, as a clue or key to the riddle of who would become champion in South Africa. Quite a few Indonesian netizens tried to uncover what was hidden behind the code. One of the most popular interpretations was made by a member of the Kaskus community:
N = North, meaning the country is located in the north.
C = Christianity, a country whose population is majority Christian.
1253 = numbers indicating the country's location. Since it lies in the north, the digits are read as its position in northern latitude: 52°-31° north.
H = Horst Köhler, the last official president elected through a general election.
Z = Zurücktreten, meaning to resign.
6 = six years.
The breakdown above points to Germany as the champion: the country lies in the north, at 52°-31° north latitude; 63% of Germany's population is Christian; and Horst Köhler was inaugurated on July 1, 2004, was re-elected president in the election of May 23, 2009, and then resigned on May 31, 2010, after serving as president for nearly six years. But Germany was knocked out in the semi-final, and all sorts of other interpretations began to spread.
What the code NC1253HZ6 actually meant was finally revealed this morning, after Spain's victory over the Netherlands in the 2010 World Cup final. Deddy appeared live on television with a box-opening ceremony. Inside was a sheet of paper with the score "Netherlands 0-2 Spain" written on it, but the 2 was crossed out and replaced with a 1. Deddy then revealed the meaning of NC1253HZ6. He opened a board folded in two: the front showed "NC1253HZ6", and when he flipped it over, the other side read "www.youtube.com/". So NC1253HZ6 turned out to be a YouTube channel!
On that channel was a video of Deddy saying: "It is now the early hours of the 16th, and I predict that the country that will win the 2010 FIFA World Cup... is Spain. Once again, the country that will win the 2010 FIFA World Cup is Spain, and I will upload this video to YouTube as soon as this recording is finished."
And indeed, Deddy promptly uploaded the video, so June 17, 2010 appears as its upload date.
Let us examine the NC1253HZ6 video thoroughly. Note the following:
The page http://www.youtube.com/NC1253HZ6 is a channel on YouTube.
NC1253HZ6 is a username, not a link to the video itself.
The actual video link is
http://www.youtube.com/watch?v=6WYnOQyWQe4
There is only one video on the NC1253HZ6 channel, and the video itself is titled NC1253HZ6.
The four points above raise suspicion, because:
If Deddy had wanted to give a real clue, he should have given 6WYnOQyWQe4, not NC1253HZ6. By itself, handing out NC1253HZ6 amounts to deception.
The score is never mentioned in the video. The contents of the box, with the words "Netherlands 0-2 1 Spain" (the 2 crossed out), are therefore questionable. If he had truly been certain, Deddy would naturally have stated the score in the video.
A video's upload date can be changed.
A channel can hold any number of videos, and videos can be deleted later.
Every channel has a "Recent Activity" log, and its entries can be deleted.
All videos uploaded by a user can be hidden from the public. There are three broadcast options:
Public: anyone can watch the video.
Unlisted: only people who have the link can watch the video.
Private: at most 25 people can watch the video, and the user decides who gets access.
THE VERDICT - In conclusion:
It's a hoax, folks, a hoax to the max!
The code NC1253HZ6 means nothing and has no relevance whatsoever to a YouTube video link.
Deddy Corbuzier is suspected of deceiving the public.
The suspected scenario:
Deddy uploaded 32 videos on June 17, 2010, saying in each one:
"I predict... Spain..."
"I predict... Brazil..."
"I predict... Honduras..."
and so on.
The 31 videos with wrong predictions were then deleted before this morning's announcement. A second scenario is that a single video was shown with a manipulated date.
The NC1253HZ6 video was kept hidden from the public until this morning's announcement.
Finally, Deddy Corbuzier is no more impressive than Paul the Octopus, who revealed the 2010 World Cup champion before the final match even began.
SOURCE: http://www.goal.com/id-ID/news/1369/piala-dunia/2010/07/12/2021166/analisis-video-nc1253hz6-mentalis-deddy-corbuzier-sangat
Character Based on Birth Month!
JANUARY
Rather quiet, except when angry or excited.
Very quick to spot other people's weaknesses, and fond of criticizing.
Hardworking; whatever they do turns a profit.
Likes tidying up and cleaning, and likes everything neat and orderly.
Sensitive and deep-thinking.
Good at winning other people over.
Finds it easy to discipline themselves.
Romantic, but not good at showing it.
Quite fond of children.
Likes staying at home.
Loyal in all things.
Needs to learn to socialize.
Extremely jealous.
FEBRUARY
Thinks in abstractions.
Intelligent, wise, and brilliant.
Has a changeable personality.
Easily charms other people.
Rather quiet, shy, and modest.
Honest and loyal in everything.
Determined to reach their goals.
Dislikes being restrained.
Quick to show and express anger.
Likes making friends, but rarely shows it.
Very bold and rebellious.
Ambitious, and fond of daydreaming.
Optimistic about making their dreams come true.
Loves entertainment and anything artistic.
Inclined toward the superstitious.
Romantic on the inside, not on the outside.
MARCH
An attractive, charming personality.
Very shy and keeps feelings bottled up.
Very kind, honest, generous, and sympathetic.
Loves peace.
Very sensitive to others.
Slow to anger and very kind-hearted.
Repays and remembers kindness.
Very sharp observation and judgment.
Tends to hold grudges if unchecked.
A daydreamer.
Loves traveling.
Very spoiled and demands a great deal of attention.
Fond of home decor.
Has artistic talent in music.
Drawn to special, fine things.
Far too moody.
APRIL
Very active and dynamic.
Quick to act and decide, but quick to regret.
Very attractive and good at pampering themselves.
Very strong mental power.
Likes receiving attention.
Diplomatic (good at persuading, befriending, and solving problems).
Very brave and fearless.
Adventurous, loving, caring, polite, and generous.
Tends to be vengeful.
Strong memory.
Very strong intuition.
Good at motivating themselves and others.
Very, very jealous.
MAY
Strong-willed and highly motivated.
Sharp thinking.
Quick to anger when unchecked.
Good at attracting people's hearts and attention.
Very deep feelings.
Beautiful in mind and body.
Firm in their stance, yet easily influenced by others.
Easily persuaded.
Systematic (left-brained).
A daydreamer.
Strong intuition; understands what is in others' hearts without being told.
Highly imaginative.
Good at debating.
Loves literature, art, music, and traveling.
Doesn't much like staying at home.
Hard-working and high-spirited.
A bit wasteful.
JUNE
Forward-thinking and visionary.
Easily used or taken advantage of because of their kindness.
Gentle-mannered.
Quick to change attitude, manner, ideas, and mood.
Too many ideas in their head.
Sensitive.
An active mind (always thinking).
Finds it hard to do things right away.
A procrastinator.
Too picky; always wants the best.
Quick to anger, quick to cool off.
Loves talking and debating.
Loves jokes, wit, and banter.
A clever, daydreaming mind.
Makes friends easily and well.
A very orderly person.
Good at putting on appearances.
Easily hurt.
Likes tidying up and cleaning.
Gets bored quickly.
Too picky and fussy.
Slow to heal when hurt.
JULY
Very happy when accompanied.
Full of secrets and hard to understand, especially the men.
Doesn't like troubling others, but doesn't mind being troubled.
Easily persuaded.
Very careful with other people's feelings.
Very friendly.
Sentimental at heart.
Quick to forgive but slow to forget.
Guides both physically and mentally.
Very sensitive, loving, and caring.
Highly sympathetic.
A sharp observer.
Likes sizing people up.
Learns easily and diligently.
Loves reminiscing about old times and old friends.
Likes keeping to themselves.
Likes staying at home.
Waits for friends rather than seeking them out.
Not aggressive unless forced.
Wants to be loved.
Easily hurt, slow to recover.
Diligent at work.
AUGUST
Loves joking around.
Polite and attentive to others.
Brave; doesn't know the meaning of fear.
Rather firm; a natural leader.
Good at persuading others.
Overly generous, yet egotistical.
Very high self-esteem.
Thirsty for praise.
Extraordinary fighting spirit.
Quick-tempered and prone to rage.
Easily angered when contradicted.
Very jealous.
A quick thinker.
Likes to lead and be led.
A daydreamer.
Romantic, loving, and caring.
Likes seeking out friends.
SEPTEMBER
Very polite.
Very careful, meticulous, and organized.
Likes pointing out people's mistakes and criticizing.
Quiet, but a good conversationalist.
Very cool-headed, very kind, and sympathetic.
Does work to perfection.
Very sensitive, though it doesn't show.
A clever mind that learns easily.
Self-control keeps the criticism in check.
Good at motivating themselves.
Loves entertainment and traveling.
Rarely shows their feelings.
Holds on to hurt for a very long time.
Too picky about partners.
Systematic.
OCTOBER
Loves those who love them.
Likes taking the middle road.
Very charming and polite.
Beautiful inside and out.
Bad at lying and pretending.
Quick to sympathize, kind, puts friends first.
Always among friends.
Easily offended, but sulks don't last.
Won't help unless asked.
Sees things from their own perspective.
Doesn't like taking other people's views.
Easily stirred emotions.
Very strong intuition (especially the women).
Loves traveling, literature, and art.
Loving, caring, and gentle.
Romantic in love.
Easily hurt and jealous.
Loves outdoor activities.
A fair person.
Loves chatting.
A great forecaster.
Egotistical.
Quick to lose confidence.
Wasteful and easily influenced by their surroundings.
NOVEMBER
Full of ideas.
Hard to read or understand.
Unique and wise thinking.
Full of extraordinary new ideas.
Sharp thinking.
Very fine, strong intuition.
Careful and meticulous.
Secretive; good at digging out and uncovering secrets.
Thinks a lot, speaks little, yet warm.
Brave, generous, loyal, and patient.
Once set on something, works at it until it succeeds.
Slow to anger unless provoked.
Thinks differently from everyone else.
A very sharp mind.
Good at motivating themselves.
Doesn't value praise.
Very deep affection and emotions.
Romantic.
Uncertain in love relationships.
Likes staying at home.
Very hard-working and highly capable.
Trustworthy, honest, loyal, and good at keeping secrets.
Unpredictable, ever-changing temperament.
DECEMBER
Very loyal and generous.
Patriotic.
Very active in games and socializing.
Impatient and hasty.
Ambitious.
Likes being influential in organizations.
Likes being praised, given attention, and pampered.
Very honest.
Bad at pretending.
Quick-tempered.
A changeable temperament.
Not egotistical despite very high self-esteem.
Hates being restrained.
Loves joking around.
Good at making jokes and thinking logically.
just take the good bits, OK.....!!! hehehe
source: http://kask.us/3960276
Monday, July 5, 2010
Calculating the size of an RGB image file
Straight to the point: what actually determines the size of an image file? The answer is the image's dimensions (width and height) and its color depth (bits per pixel). Dimensions are clear enough: the width and height of the image in pixels, e.g. 640×480, 800×600, 1024×768, and so on.
Color depth, meanwhile, is the space reserved for the color information of a single pixel (a pixel being the smallest unit of an image's dimensions). For example, a depth of 24 bits means each pixel gets 24 bits to store its color. Since we are talking about the RGB color space, those 24 bits are split three ways: R (red) gets 8 bits, G (green) gets 8 bits, and B (blue) gets its own 8-bit share. So one pixel's color is composed of the three RGB components. The same goes for 16-bit, 32-bit, or any other depth: 16 bits simply means each pixel takes 16 bits of space.
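To make that bit layout concrete, here is a minimal Python sketch (my own illustration, not part of the original post; the channel values are arbitrary) that packs three 8-bit RGB channels into one 24-bit pixel value and unpacks them again:

# Pack three 8-bit RGB channels (0-255 each) into a single 24-bit pixel value.
r, g, b = 255, 128, 0              # arbitrary example color (orange)

pixel = (r << 16) | (g << 8) | b   # 24 bits total: 8 bits per channel
print(f"pixel = {pixel:#08x}")     # -> pixel = 0xff8000

# Unpack: shift each channel back down and mask off its 8 bits.
r2 = (pixel >> 16) & 0xFF
g2 = (pixel >> 8) & 0xFF
b2 = pixel & 0xFF
assert (r2, g2, b2) == (r, g, b)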
OK, I think the theory above is clear enough. Here are two examples:
1. Say we have an image of 100×100 pixels at a depth of 24 bits. Its raw file size is:
100×100×24 = 240,000 bits = 240,000/8 bytes = 30,000 bytes ≈ 29.30 KByte
2. Another example: an image of 1024×768 at a depth of 16 bits. Its raw file size is:
1024×768×16 = 12,582,912 bits = 12,582,912/8 bytes = 1,572,864 bytes = 1536 KByte = 1.5 MByte
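As a quick sanity check (a minimal Python sketch of my own, not from the original post), the same raw-size formula applied to both examples:

def raw_image_size_bits(width, height, bit_depth):
    # Raw (uncompressed) size in bits: number of pixels times bits per pixel.
    return width * height * bit_depth

for w, h, depth in [(100, 100, 24), (1024, 768, 16)]:
    bits = raw_image_size_bits(w, h, depth)
    nbytes = bits // 8
    print(f"{w}x{h} @ {depth}-bit: {bits} bits = {nbytes} bytes "
          f"= {nbytes / 1024:.2f} KB = {nbytes / 1024 ** 2:.2f} MB")

Running it prints 29.30 KB for the first example and 1536.00 KB = 1.50 MB for the second, matching the figures above.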
By "raw file" here I mean the original, uncompressed file in BMP format. For compressed formats such as JPG/JPEG the math is different again. (Hopefully I can write a post about JPEG compression someday.)
That's it for today's lesson. God willing, we'll pick this up again some other time..
Hope it's useful…
NB:
1 MByte (megabyte) = 1024 KByte (kilobytes) = 1,048,576 bytes = 8,388,608 bits
1 KByte (kilobyte) = 1024 bytes = 8192 bits
1 byte = 8 bits