Correcting Grammar Mistakes

When we write, it is almost inevitable that most of us will make spelling mistakes. Fortunately, spell checkers are there to help us: they detect spelling mistakes with high accuracy (even though uncommon words or names are sometimes incorrectly marked as errors) and can usually suggest a sensible correction for a word. It is probably safe to say that spell checkers are the most commonly used natural language processing tools.

However, another class of writing mistakes is much more difficult to detect and correct: grammar mistakes. People tend to make relatively few grammar mistakes in their first (mother) language. For people writing in a language other than their first, however, this is a significant issue, as languages differ considerably in their grammar and in how words are used. A large proportion of internet users write in English even though it is their second or third language, and grammar correction tools can be of great help in allowing these users to communicate effectively.

There has recently been an increase in research on automatic methods for grammatical error correction of writing by second-language speakers. The most common types of errors are article and preposition errors; subject-verb agreement errors, verb form errors and noun number errors also occur often. Here are a few examples of real-world grammar errors, along with corrections:

Article errors:
Population is large while the resources are limited
The population is large while resources are limited

Preposition errors:
They may die for hunger
They may die from hunger

Verb form errors:
He has never being there before
He has never been there before

Subject-verb agreement errors:
People needs a safe environment to live in
People need a safe environment to live in

Noun number errors:
We cannot apply it to our daily life
We cannot apply it to our daily lives

Combination of errors:
This will help to prevent the family to loss their member
This will help to prevent families from losing their members 

How can we detect and correct these errors automatically? A simple approach is to use local word context to try to predict what the correct word in a given position should be. However, this will not always work, because long-distance dependencies between words also need to be taken into account to decide on the correct word choice. It is clear that the grammatical structure of a sentence (the syntax) plays an important role: if there is an irregularity in the syntax of a sentence, we can try to find a correct grammatical structure that is close to the original structure, and modify the sentence accordingly.
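As a rough illustration of the local-context idea, here is a small Python sketch that chooses between candidate articles at a given position by comparing trigram counts. The counts, the candidate set and the function name are purely illustrative; a real system would estimate the counts from a very large corpus.

```python
# Toy local-context prediction: pick the article (or no article) with the
# highest trigram count in this context. The counts below are made up.
trigram_counts = {
    ("while", "", "resources"): 40,    # "... while resources are limited"
    ("while", "the", "resources"): 5,  # "... while the resources are limited"
    ("while", "a", "resources"): 0,
}

def best_article(left, right, candidates=("", "a", "an", "the")):
    """Return the candidate with the highest count between `left` and `right`."""
    return max(candidates, key=lambda c: trigram_counts.get((left, c, right), 0))

# "Population is large while (?) resources are limited"
print(repr(best_article("while", "resources")))  # -> '' (no article)
```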

The approach I am working on for my research is syntax-based. This means that the syntactic structure of a sentence (see How to model Natural Language) is used to find possible corrections to that sentence. The model is formulated in terms of rules: each rule consists of a correct word or phrase and a corresponding (possibly incorrect) word or phrase. These phrases can also include syntactic elements (placeholders for phrases with a particular syntactic function).

Here are some (slightly simplified) examples of rules; a sketch of how they could be applied follows after the list. NP indicates “noun phrase”, VP is “verb phrase”, PP is “prepositional phrase”, JJ is an adjective, NN is a singular noun and NNS is a plural noun.

NN -> the NN
the NN -> NN
for -> from
has being there -> has been there
NNS needs -> NNS need
to loss NP -> from losing NP
our JJ life -> our JJ lives
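To make this more concrete, here is a simplified Python sketch of how such rules might be represented and applied. It assumes the sentence has already been tagged with part-of-speech labels and only matches flat tag sequences, whereas the actual model works over full syntactic analyses; the rule set and the tagging in the example are illustrative.

```python
# A simplified sketch, assuming the sentence is already tagged with
# part-of-speech labels; placeholder symbols (NN, NNS, ...) match any token
# with that tag, and everything else matches the literal word. The real
# model operates on full parse trees rather than flat tag sequences.

RULES = [
    # (incorrect pattern, corrected pattern)
    (["for"], ["from"]),
    (["has", "being", "there"], ["has", "been", "there"]),
    (["NNS", "needs"], ["NNS", "need"]),
]

def matches(pattern, tagged, i):
    """Does `pattern` match the tagged tokens starting at position i?"""
    for offset, item in enumerate(pattern):
        if i + offset >= len(tagged):
            return False
        word, tag = tagged[i + offset]
        if item != word and item != tag:
            return False
    return True

def apply_rules(tagged):
    """Rewrite the first matching rule at each position, left to right."""
    out, i = [], 0
    while i < len(tagged):
        for pattern, correction in RULES:
            if matches(pattern, tagged, i):
                span = tagged[i:i + len(pattern)]
                for j, item in enumerate(correction):
                    # A placeholder in the correction copies the matched word;
                    # literal items are inserted as they are.
                    if j < len(span) and item == span[j][1]:
                        out.append(span[j][0])
                    else:
                        out.append(item)
                i += len(pattern)
                break
        else:
            out.append(tagged[i][0])
            i += 1
    return " ".join(out)

# "People needs a safe environment to live in"
tagged = [("People", "NNS"), ("needs", "VBZ"), ("a", "DT"), ("safe", "JJ"),
          ("environment", "NN"), ("to", "TO"), ("live", "VB"), ("in", "IN")]
print(apply_rules(tagged))  # -> People need a safe environment to live in
```

In this toy version every matching rule is applied unconditionally; deciding whether a rewrite is actually an improvement is what the probabilities and the language model described next are for.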

The basic idea is to extract these rules from examples of real grammar errors. We then estimate probabilities for these rules, which indicate how likely a word or phrase is to be changed into another phrase. For a given sentence that we want to correct, the system then searches through possible syntactic analyses and corrections of the sentence. As there are many possibilities to consider, this can be quite a slow process, and some engineering is required to make it tractable. The model also includes a language model, which assigns probabilities to sentences based on the occurrence of words and phrases in the training data.
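As a rough sketch of how these pieces could fit together, the snippet below estimates rule probabilities by relative frequency from a handful of made-up (original, corrected) phrase pairs and combines them with a toy unigram language model. The training pairs, the vocabulary and the scoring function are illustrative assumptions, not the actual model.

```python
# Sketch: rule probabilities estimated by relative frequency from toy
# (original, corrected) pairs, combined with a toy unigram language model.
import math
from collections import Counter

# Hypothetical pairs extracted from annotated learner text.
training_pairs = [
    ("for", "from"), ("for", "from"), ("for", "for"),
    ("needs", "need"), ("needs", "needs"),
]

pair_counts = Counter(training_pairs)
source_counts = Counter(src for src, _ in training_pairs)

def rule_prob(src, tgt):
    """P(correction | original phrase), estimated by relative frequency."""
    return pair_counts[(src, tgt)] / source_counts[src]

# A toy unigram language model (log probabilities) over a tiny vocabulary.
unigram_logprob = {"they": -2.0, "may": -3.0, "die": -4.0,
                   "from": -3.5, "for": -3.0, "hunger": -5.0}

def score(sentence_tokens, applied_rules):
    """Combine the language model score with the rule probabilities."""
    lm = sum(unigram_logprob.get(w, -10.0) for w in sentence_tokens)
    rules = sum(math.log(rule_prob(src, tgt)) for src, tgt in applied_rules)
    return lm + rules

original  = ["they", "may", "die", "for", "hunger"]
corrected = ["they", "may", "die", "from", "hunger"]
print(score(original,  [("for", "for")]))    # keep the sentence unchanged
print(score(corrected, [("for", "from")]))   # apply the "for -> from" rule
```

With these made-up numbers the corrected sentence scores slightly higher than the original, because the gain from the “for -> from” rule outweighs the small loss in language model probability.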

How well does this work? For error correction systems, this is measured with precision and recall. Precision is the proportion of suggested edits that correct a mistake, and recall is the proportion of corrections that should be performed that are actually found by the system. The F1 score is the harmonic mean of these two scores. There is a tradeoff between precision and recall: if a system suggests more corrections, the recall will increase but the precision will decrease. On the other hand, if the system only makes corrections in which it has high confidence, the precision will be higher, but the recall lower.
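For concreteness, here is how these measures can be computed; the edit counts in the example are chosen only to match the figures quoted in the next paragraph.

```python
def precision_recall_f1(correct_edits, suggested_edits, required_edits):
    """Precision, recall and F1 (the harmonic mean of the two)."""
    precision = correct_edits / suggested_edits  # correct edits / edits suggested
    recall = correct_edits / required_edits      # correct edits / edits required
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# 60 correct edits out of 100 suggested, with 200 corrections required:
print(precision_recall_f1(60, 100, 200))  # -> (0.6, 0.3, 0.4)
```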

Grammar correction is a difficult task. Only recently have sufficiently large datasets become publicly available to allow for the consistent evaluation of different methods, and even speakers of a language often disagree on whether a sentence is grammatically correct or not. In standardized evaluations, the best systems obtain an F1 score of around 40% (60% precision and 30% recall), so there is definitely still room for improvement. I am confident that, in time, we will reach a stage where these models are useful and able to correct most serious grammatical errors.
