Neural Network for Unicode Visual Character Recognition
BibTeX:

@article{IJIRSTV4I4020,
Abstract:
The central aim of this paper is to demonstrate the capability of Artificial Neural Network implementations in recognizing extended sets of visual language symbols. Applications of this technique range from document digitization and protection to handwritten text recognition on handheld devices. The classic difficulty in correctly recognizing even typed visual language symbols is the irregularity among pictorial representations of the same character due to variations in font, style and size. This irregularity widens further when one deals with handwritten characters. Hence conventional programming methods of mapping symbol images into matrices, analyzing pixel and/or vector data, and deciding which image corresponds to which symbol would yield little or no realistic result. Clearly the needed method is one that can detect the ‘proximity’ of a graphic representation to known symbols and make decisions based on this proximity. To implement such proximity algorithms with conventional programming, one would need to write endless code, one branch for each possible irregularity or deviation from the expected output, whether in terms of pixel or vector parameters: clearly not a realistic undertaking. One network with a supervised learning rule is the Multi-Layer Perceptron (MLP) model. It uses the generalized Delta Learning Rule to adjust its weights and can be trained on a set of input/desired-output pairs over a number of iterations. The very nature of this model is that it drives the output toward one of the nearby trained values when it is fed an input variation it was not trained on, thus solving the proximity issue. Both concepts are discussed in the introduction of this report.
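The training scheme the abstract describes can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the 3x3 glyph bitmaps, layer sizes, and learning rate below are all assumptions chosen for brevity. It shows a one-hidden-layer perceptron whose weights are adjusted with the generalized delta rule, and demonstrates the "proximity" behavior: a slightly corrupted glyph still maps to the nearest trained symbol.

```python
import numpy as np

# Hypothetical 3x3 bitmaps for two toy "characters" (a real system would
# use larger glyph rasters rendered in many fonts and styles).
GLYPHS = {
    "I": [0, 1, 0,  0, 1, 0,  0, 1, 0],
    "O": [1, 1, 1,  1, 0, 1,  1, 1, 1],
}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TinyMLP:
    """One-hidden-layer perceptron trained with the generalized delta rule."""

    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, n_out))

    def forward(self, x):
        self.h = sigmoid(x @ self.W1)   # hidden activations
        return sigmoid(self.h @ self.W2)

    def train_step(self, x, target, lr=0.5):
        y = self.forward(x)
        # Delta rule: error signal scaled by the sigmoid derivative,
        # propagated back from the output layer to the hidden layer.
        delta_out = (target - y) * y * (1.0 - y)
        delta_hid = (delta_out @ self.W2.T) * self.h * (1.0 - self.h)
        self.W2 += lr * np.outer(self.h, delta_out)
        self.W1 += lr * np.outer(x, delta_hid)

labels = list(GLYPHS)
X = np.array([GLYPHS[c] for c in labels], dtype=float)
T = np.eye(len(labels))                 # one-hot desired outputs

net = TinyMLP(n_in=9, n_hidden=6, n_out=len(labels))
for _ in range(2000):                   # iterate over the training set
    for x, t in zip(X, T):
        net.train_step(x, t)

# A noisy "I" (one flipped pixel) should still map to the nearest symbol.
noisy_I = X[0].copy()
noisy_I[0] = 1.0
pred = labels[int(np.argmax(net.forward(noisy_I)))]
print(pred)
```

The key point the abstract makes is visible in the last step: no rule for the flipped pixel was ever written, yet the trained network resolves the corrupted input to the closest known character.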
Keywords:
Neural Networks, Visual Language, Multi-Layer Perceptron



