Finite State Automata And Image Recognition

Finite state methods have been used in many fields of computational linguistics. It is a well-established fact that each regular expression can be transformed into a nondeterministic finite automaton (NFA) with or without ε-transitions (Brüggemann-Klein, A., Regular expressions into finite automata, Theoretical Computer Science 120 (1993) 197-213). This study uses finite state automata (FSA) for the recognition of geometric images by automaton models of systems. The geometric images obey laws of operation (see 10-12) given by the next-state function δ: S × X → S and the output function λ: S × X → Y of an initial finite state machine A = (S, X, Y, δ, λ) with a set of states S, a set of input signals X and a set of output signals Y.
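
The initial finite state machine A = (S, X, Y, δ, λ) referred to above can be pictured with a short sketch. The Python below is only an illustration under stated assumptions: the class name MealyMachine, the dictionary encoding of δ and λ, and the toy tables are not taken from the cited works.

```python
# A minimal sketch of an initial finite state machine A = (S, X, Y, delta, lam),
# i.e. a Mealy machine: delta: S x X -> S is the next-state function and
# lam: S x X -> Y is the output function.  All concrete values are illustrative.

class MealyMachine:
    def __init__(self, states, inputs, outputs, delta, lam, start):
        self.states = states    # S
        self.inputs = inputs    # X
        self.outputs = outputs  # Y
        self.delta = delta      # next-state function as a dict: (state, input) -> state
        self.lam = lam          # output function as a dict: (state, input) -> output
        self.start = start      # initial state

    def run(self, word):
        """Feed a sequence of input signals and collect the output signals."""
        state, out = self.start, []
        for x in word:
            out.append(self.lam[(state, x)])
            state = self.delta[(state, x)]
        return out

# Toy machine that outputs 1 whenever the current input repeats the previous symbol.
delta = {("s0", 0): "a", ("s0", 1): "b", ("a", 0): "a", ("a", 1): "b", ("b", 0): "a", ("b", 1): "b"}
lam = {("s0", 0): 0, ("s0", 1): 0, ("a", 0): 1, ("a", 1): 0, ("b", 0): 0, ("b", 1): 1}
m = MealyMachine({"s0", "a", "b"}, {0, 1}, {0, 1}, delta, lam, "s0")
print(m.run([0, 0, 1, 1, 0]))   # -> [0, 1, 0, 1, 0]
```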

Some complicated conversion algorithms have also been in existence, with applications in speech recognition, other language processing areas, and image processing. The equivalence between regular grammar and finite automata in accepting languages is well established. A finite automaton is ambiguous if it admits distinct accepting paths with the same label. There are three stages to parsing the syllable as a finite state diagram, the first of which is shown in Fig. 1 (FSA Stage 1).

Keywords: Regular Grammar, Finite Automata, NFA, DFA

Some Equivalent Conversion Algorithms between Regular Grammar and Finite Automata

The definition of DFA, in which some notations used in the remainder of this paper appear, is given first. Definition 1. A DFA M is an automatic recognition device that is a quintuple denoted by M = (S, Σ, δ, s0, F), where each element in S indicates one state of the present system; Σ denotes the set of conditions under which the system may transit; δ is a single-valued function from S × Σ to S, with δ(s1, a) = s2 indicating that if the state of the current system is s1 with an input a, there will be a transition from the current state to the successive one named s2; s0 is the unique start state; and F is the set of final states. With δ, one can easily identify whether a condition string in Σ* can be accepted by the DFA or not. The definition of NFA and regular grammar, as well as the subset-based construction algorithm from NFA to DFA, can easily be found in the literature.
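
As a concrete reading of Definition 1, here is a minimal Python sketch of a DFA together with the extended use of δ on a whole condition string. The class name, the dictionary encoding of δ, and the toy automaton are assumptions made for illustration.

```python
# A minimal sketch of a DFA M = (S, Sigma, delta, s0, F), assuming delta is
# given as a dict mapping (state, symbol) -> state.  accepts() applies delta
# symbol by symbol: the empty string leaves the state unchanged; for a string
# a.w the machine first moves to delta(s, a) and then continues from there.

class DFA:
    def __init__(self, states, alphabet, delta, start, finals):
        self.states, self.alphabet = states, alphabet
        self.delta, self.start, self.finals = delta, start, finals

    def accepts(self, word):
        state = self.start
        for a in word:
            if (state, a) not in self.delta:
                return False            # undefined transition: reject
            state = self.delta[(state, a)]
        return state in self.finals     # accept iff we stop in a final state

# Toy DFA accepting binary strings that end in 1.
delta = {("s0", "0"): "s0", ("s0", "1"): "s1",
         ("s1", "0"): "s0", ("s1", "1"): "s1"}
m = DFA({"s0", "s1"}, {"0", "1"}, delta, "s0", {"s1"})
print(m.accepts("0101"))  # True
print(m.accepts("10"))    # False
```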

That is to say, if the condition is ε, the current state is unchanged; if the state is s and the condition is aw, the system will first map δ(s, a) to s1, and then continue to map from s1 until the last condition symbol. Definition 2. For some condition ω where ω ∈ Σ*, if δ(s0, ω) ∈ F holds, then we say that the DFA can accept the condition ω. If a regular grammar G describes the same language as that which a finite automaton M identifies, viz., L(G) = L(M), then G is equivalent to M. The following theorems are concerned with the equivalence between regular grammar and finite automata. Theorem 1. For each right linear grammar G_R (or left linear grammar G_L), there is one finite automaton M where L(M) = L(G_R) (or L(M) = L(G_L)).

Here is the construction algorithm from regular grammar to finite automata, together with the proof of correctness. It contains two cases, viz., one from right linear grammar and another from left linear grammar to finite automata. Construction Algorithm 1. For a given right linear grammar G_R = (V_N, V_T, P, S), there is a corresponding NFA M = (V_N ∪ {f}, V_T, δ, S, {f}), where f is a newly added final state with f ∉ V_N holding; the transition function δ is defined by the following rules: 1) for any A, B ∈ V_N and a ∈ V_T, if A→aB ∈ P holds, then let B ∈ δ(A, a) hold; or 2) for any A ∈ V_N and a ∈ V_T, if A→a ∈ P holds, then let f ∈ δ(A, a) hold.
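
A minimal Python sketch of Construction Algorithm 1 might look as follows. The encoding of productions as (head, body) pairs, the helper names right_linear_to_nfa and nfa_accepts, and the toy grammar are assumptions for illustration, not code from the paper.

```python
# Sketch of Construction Algorithm 1: right linear grammar G_R -> NFA M.
# Assumptions: nonterminals and terminals are single characters; productions are
# (head, body) pairs with body "aB" (rule A -> aB) or "a" (rule A -> a);
# "f" is the newly added final state (f not in V_N).

def right_linear_to_nfa(nonterminals, terminals, productions, start):
    final = "f"                                   # newly added final state
    states = set(nonterminals) | {final}
    delta = {}                                    # (state, symbol) -> set of states
    for head, body in productions:
        if len(body) == 2 and body[1] in nonterminals:   # rule A -> aB
            delta.setdefault((head, body[0]), set()).add(body[1])
        elif len(body) == 1 and body in terminals:       # rule A -> a
            delta.setdefault((head, body), set()).add(final)
    return states, set(terminals), delta, start, {final}

def nfa_accepts(nfa, word):
    """Run the NFA on a word by tracking the set of reachable states."""
    states, alphabet, delta, start, finals = nfa
    current = {start}
    for a in word:
        nxt = set()
        for s in current:
            nxt |= delta.get((s, a), set())
        current = nxt
    return bool(current & finals)

# Toy grammar: S -> 0S | 1S | 1   (strings over {0, 1} ending in 1)
nfa = right_linear_to_nfa({"S"}, {"0", "1"}, [("S", "0S"), ("S", "1S"), ("S", "1")], "S")
print(nfa_accepts(nfa, "0011"))   # True
print(nfa_accepts(nfa, "10"))     # False
```

On the toy grammar, the NFA built this way accepts exactly the strings the grammar derives, which is what the proof below argues in general.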

Proof. Here we let ω = ω1ω2⋯ωn where ωi ∈ V_T; then S ⇒* ω if and only if S ⇒ ω1A1 ⇒ ω1ω2A2 ⇒ ⋯ ⇒ ω1⋯ωn−1An−1 ⇒ ω1⋯ωn. For G_R, therefore, the sufficient and necessary condition of S ⇒* ω is that there is one path from S, the start state, to f, the final state, in M. In the last derivation step, using A→a once is equal to the case that the current state A, meeting with a, will be transited to f, the final state, in M. During the course of the transition, all the conditions met one by one are just equal to ω, viz., S ⇒* ω if and only if f ∈ δ(S, ω). Therefore, it is evident that L(G_R) = L(M) holds.

Construction Algorithm 2. For a given left linear grammar G_L = (V_N, V_T, P, S), there is a corresponding NFA M = (V_N ∪ {q}, V_T, δ, q, {S}), where q is a newly added start state with q ∉ V_N holding; the transition function δ is defined by the following rules: 1) for any A, B ∈ V_N and a ∈ V_T, if A→Ba ∈ P holds, then let A ∈ δ(B, a) hold; or 2) for any A ∈ V_N and a ∈ V_T, if A→a ∈ P holds, then let A ∈ δ(q, a) hold. The proof of Construction Algorithm 2 is similar to that of Construction Algorithm 1, and we obtain Theorem 2. For each finite automaton M, there is one right linear grammar G_R or left linear grammar G_L where L(M) = L(G_R) or L(M) = L(G_L).
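
Construction Algorithm 2 can be sketched in the same style; again the function name, the production encoding, and the toy left linear grammar are assumptions for illustration.

```python
# Sketch of Construction Algorithm 2: left linear grammar G_L -> NFA M.
# Assumptions: single-character symbols; productions are (head, body) pairs with
# body "Ba" (rule A -> Ba) or "a" (rule A -> a); "q" is the newly added start
# state (q not in V_N); the grammar's start symbol S becomes the only final state.

def left_linear_to_nfa(nonterminals, terminals, productions, start_symbol):
    new_start = "q"                               # newly added start state
    states = set(nonterminals) | {new_start}
    delta = {}                                    # (state, symbol) -> set of states
    for head, body in productions:
        if len(body) == 2 and body[0] in nonterminals:   # rule A -> Ba: transition B --a--> A
            delta.setdefault((body[0], body[1]), set()).add(head)
        elif len(body) == 1 and body in terminals:       # rule A -> a: transition q --a--> A
            delta.setdefault((new_start, body), set()).add(head)
    return states, set(terminals), delta, new_start, {start_symbol}

# Toy grammar: S -> S0 | S1 | 0   (all strings over {0, 1} that start with 0)
nfa = left_linear_to_nfa({"S"}, {"0", "1"}, [("S", "S0"), ("S", "S1"), ("S", "0")], "S")
states, alphabet, delta, start, finals = nfa
current = {start}
for a in "011":                                   # simulate the NFA on "011"
    current = {t for s in current for t in delta.get((s, a), set())}
print(bool(current & finals))                     # True: "011" starts with 0
```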

Construction Algorithm 3. For a given finite automaton M = (S, Σ, δ, s0, F), a corresponding right linear grammar G_R can be constructed: 1) for any A, B ∈ S and a ∈ Σ, if δ(A, a) = B holds, then let A→aB hold; 2) if δ(A, a) = B with B ∈ F also holds, then let A→a hold as well. So, a new generation rule s1→s0|ε is added to the G_R created from step 1), where s1 is a newly added start symbol with the original symbol s0 being no longer the start symbol any more and s1 ∉ S holding. From step 1) we know that A ⇒* ωB holds in G_R whenever B ∈ δ(A, ω) holds in M; or 2), if δ(A, ω) ∈ F holds, then A ⇒* ω holds because of the rules added in step 2).

The Improved Version for Construction Algorithm 3

Construction Algorithm 3 discussed above is somewhat complex. The following one, named Construction Algorithm 4 and more easily understood, is its simplified version. Construction Algorithm 4. For a given finite automaton M = (S, Σ, δ, s0, F), a corresponding right linear grammar G_R = (S, Σ, P, s0) can be constructed, where P is defined as follows: for any A, B ∈ S and a ∈ Σ, 1) if δ(A, a) = B holds, then let A→aB hold; 2) if B ∈ F holds, then we add a generation rule B→ε. Here B may be equal to s0, and as long as B is a member of the set of final states, B→ε must be added.

Proof. For any ω = ω1ω2⋯ωn in G_R, if s0 ⇒* ω holds, let δ(s_{i−1}, ω_i) = s_i hold where s_n ∈ F; we have s0 ⇒ ω1s1 ⇒ ω1ω2s2 ⇒ ⋯ ⇒ ω1⋯ω_is_i ⇒ ⋯ ⇒ ω1⋯ωn. That is to say, s0 ⇒* ω holds if and only if there is a path from s0, meeting the conditions of ω one by one, to a final state in M.
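
Construction Algorithm 4 is mechanical enough to sketch in a few lines of Python. The dictionary encoding of δ, the function name dfa_to_right_linear, and the toy DFA are assumptions for illustration.

```python
# Sketch of Construction Algorithm 4: DFA M -> right linear grammar G_R.
# Assumptions: delta is a dict (state, symbol) -> state; the states are used
# directly as nonterminals and s0 as the start symbol.
# Rule 1): every transition delta(A, a) = B yields A -> aB.
# Rule 2): every final state B yields B -> "" (epsilon), also when B == s0.

def dfa_to_right_linear(states, alphabet, delta, start, finals):
    productions = []
    for (A, a), B in delta.items():
        productions.append((A, a + B))        # rule 1): A -> aB
    for B in finals:
        productions.append((B, ""))           # rule 2): B -> epsilon
    return states, alphabet, productions, start

# Toy DFA over {0, 1} accepting strings that end in 1: start state A, final state B.
delta = {("A", "0"): "A", ("A", "1"): "B",
         ("B", "0"): "A", ("B", "1"): "B"}
grammar = dfa_to_right_linear({"A", "B"}, {"0", "1"}, delta, "A", {"B"})
for head, body in grammar[2]:
    print(head, "->", body or "ε")
# Expected rules: A -> 0A, A -> 1B, B -> 0A, B -> 1B, B -> ε
```

Running the sketch on the toy machine prints exactly the rules that steps 1) and 2) prescribe.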

It is obvious that Construction Algorithm 4 is much simpler than Construction Algorithm 3. The following Construction Algorithm 5, presented in this work as far as I know, is an effective algorithm for the equivalent conversion from a finite automaton M to a left linear grammar G_L according to Construction Algorithm 4; its proof of correctness is also given. Construction Algorithm 5. Let a given finite automaton be M = (S, Σ, δ, s0, F); adding q, a new symbol, as the start symbol with q ∉ S holding, let G_L = (S ∪ {q}, Σ, Ψ, q) hold, where Ψ is defined by the following rules: for any A, B ∈ S and a ∈ Σ, 1) if δ(A, a) = B holds, then let B→Aa hold; 2) add a generation rule s0→ε; and 3) for any f ∈ F, add a generation rule q→f. Rule 3) means that we add a new state q as the final state, and then link all the original final states, which are no longer final ones, to q through ε respectively in the state transition diagram of M. In particular, we can let G_L = (S, Σ, Ψ, f) hold when F, the set of final states, contains only one final state f, where Ψ is defined by rules 1) and 2) alone.

Proof. For the left linear grammar G_L, using q→f once is equivalent to the case that one of the original final states, meeting ε, is transited to q in M at the very beginning of the rightmost derivation of q ⇒* ω; during the course of the derivation, using B→Aa once is equivalent to the case that the state A, meeting a, is transited to the successive state B in M; in the final step of the derivation, using s0→ε once is equivalent to the case that the state s0, meeting ε, stops in s0 in M. Let ω = ω1ω2⋯ωn hold without loss of generality, where ωi ∈ Σ. If q ⇒* ω holds, we have the rightmost derivation q ⇒ f ⇒ s_{n−1}ω_n ⇒ s_{n−2}ω_{n−1}ω_n ⇒ ⋯ ⇒ s_{i−1}ω_i⋯ω_n ⇒ ⋯ ⇒ s_0ω_1⋯ω_n ⇒ ω_1⋯ω_n, and there is a transition path δ(s_0, ω_1) = s_1, δ(s_1, ω_2) = s_2, ..., δ(s_{n−1}, ω_n) = f in M, of which each step corresponds to the inverse of one step of the rightmost derivation above. Therefore, q ⇒* ω holds if and only if δ(s_0, ω) ∈ F holds, viz., L(G_L) = L(M): the rightmost derivation of q ⇒* ω is just the inverse chain of the path along which M transits from the very start state s0 to the very final state f, with all the conditions linked together in the path being identical with ω.
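
A matching sketch of Construction Algorithm 5 follows; the function name dfa_to_left_linear, the production encoding, and the toy DFA are again assumptions for illustration.

```python
# Sketch of Construction Algorithm 5: DFA M -> left linear grammar G_L.
# Assumptions: delta is a dict (state, symbol) -> state; "q" is the newly added
# start symbol (q not in S).
# Rule 1): delta(A, a) = B yields B -> Aa.
# Rule 2): add s0 -> epsilon.
# Rule 3): for every final state f add q -> f.

def dfa_to_left_linear(states, alphabet, delta, start, finals, new_start="q"):
    productions = []
    for (A, a), B in delta.items():
        productions.append((B, A + a))            # rule 1): B -> Aa
    productions.append((start, ""))               # rule 2): s0 -> epsilon
    for f in finals:
        productions.append((new_start, f))        # rule 3): q -> f
    return set(states) | {new_start}, alphabet, productions, new_start

# Same toy DFA: start state A, final state B, accepts strings ending in 1.
delta = {("A", "0"): "A", ("A", "1"): "B",
         ("B", "0"): "A", ("B", "1"): "B"}
grammar = dfa_to_left_linear({"A", "B"}, {"0", "1"}, delta, "A", {"B"})
for head, body in grammar[2]:
    print(head, "->", body or "ε")
# Expected rules: A -> A0, B -> A1, A -> B0, B -> B1, A -> ε, q -> B
```

For instance, the string 011 is derived as q ⇒ B ⇒ B1 ⇒ A11 ⇒ A011 ⇒ 011, mirroring the transition path of the DFA in reverse as the proof describes.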

The state transition diagram of M is shown in Figure 1. Now we can construct a left linear grammar G_L from it by Construction Algorithm 5. In Figure 1, we can reduce G_L to G_L = (S, Σ, Ψ, f) because of only one final state f here, where L(G_L) = L(M) holds.
