Lucene's built-in analyzer packages
Published: 2019-05-26



Lucene ships with many built-in analyzer packages, covering almost every country and region in the world. I (Sanxian) have recently been working on multilingual analysis, mainly for Spanish, Portuguese, German, French, and Italian. These languages are all quite similar to English in that they are whitespace-delimited.

First, what do lemmatization and stemming contribute to search? The concepts first: lemmatization reduces a word in any inflected form to its base dictionary form, which expresses a complete meaning, while stemming extracts the stem or root of a word, which may not be a complete word by itself. Lemmatization and stemming are the two main approaches to morphological normalization; both effectively conflate variant word forms, and they are related but distinct. For a detailed introduction, please refer to the references.
In e-commerce search, stemming and singular/plural folding matter a great deal (mainly for nouns), because they affect both the precision and the recall of a query. What happens if our analyzer does nothing for such words? Consider the following example.
Sentence: i have two cats
If the analyzer does nothing:
a search for "cat" returns no hits, and only a search for "cats" matches the document, even though "cat" and "cats" denote the same thing in different forms. Without this processing, both precision and recall suffer and the search experience degrades, so stemming is a crucial step in some analysis scenarios.
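To make this concrete, here is a minimal sketch in plain Java (not Lucene code; the naive `stripPlural` helper is hypothetical and only handles the simplest English plural) showing how folding both indexed tokens and the query to the same form restores the match:

```java
import java.util.HashSet;
import java.util.Set;

public class PluralDemo {
    // Deliberately naive plural folding, for illustration only:
    // strip a trailing "s" (but not "ss") from tokens longer than three chars.
    static String stripPlural(String token) {
        if (token.length() > 3 && token.endsWith("s") && !token.endsWith("ss"))
            return token.substring(0, token.length() - 1);
        return token;
    }

    public static void main(String[] args) {
        Set<String> index = new HashSet<>();
        for (String token : "i have two cats".split(" "))
            index.add(stripPlural(token));            // index the folded form

        // the query term is folded the same way, so "cat" and "cats" both hit
        System.out.println(index.contains(stripPlural("cat")));   // true
        System.out.println(index.contains(stripPlural("cats")));  // true
    }
}
```

Because the same folding runs at index time and at query time, either surface form finds the document.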
In this post I (Sanxian) will analyze, with reference to the source code, how stemming is done for German. First, the declaration of a German analyzer:

Java code:

List<String> list = new ArrayList<String>();
list.add("player"); // words in this set are excluded from stemming and lemmatization
CharArraySet ar = new CharArraySet(Version.LUCENE_43, list, true);
// the analyzer's second argument is the stopword set; the third is the
// exclusion set of words that skip stemming and singular/plural folding
GermanAnalyzer sa = new GermanAnalyzer(Version.LUCENE_43, null, ar);
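The effect of that exclusion set can be sketched without Lucene at all: the stemming stage simply skips any term found in the set. This is a hypothetical stand-in (the trivial `dropTrailingS` stemmer is not the real German one) just to show the control flow that SetKeywordMarkerFilter and the stem filter implement between them:

```java
import java.util.Set;

public class ExclusionDemo {
    // plays the role of the CharArraySet passed to the analyzer above
    static final Set<String> EXCLUSIONS = Set.of("player");

    // hypothetical toy stemmer, standing in for the real German stemmer
    static String dropTrailingS(String term) {
        return term.endsWith("s") ? term.substring(0, term.length() - 1) : term;
    }

    static String analyze(String term) {
        // excluded terms are marked as keywords and pass through unchanged
        if (EXCLUSIONS.contains(term)) return term;
        return dropTrailingS(term);
    }

    public static void main(String[] args) {
        System.out.println(analyze("player")); // "player": excluded, untouched
        System.out.println(analyze("cats"));   // "cat": stemmed normally
    }
}
```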

Next, let's look at the filter stages a token passes through in the German analyzer:

Java code:

protected TokenStreamComponents createComponents(String fieldName,
    Reader reader) {
  // standard tokenization
  final Tokenizer source = new StandardTokenizer(matchVersion, reader);
  TokenStream result = new StandardFilter(matchVersion, source);
  // lowercase filter
  result = new LowerCaseFilter(matchVersion, result);
  // stopword filter
  result = new StopFilter(matchVersion, result, stopwords);
  // keyword-marker filter for the exclusion set
  result = new SetKeywordMarkerFilter(result, exclusionSet);
  if (matchVersion.onOrAfter(Version.LUCENE_36)) {
    // Lucene 3.6 and later use the following filters:
    // normalization, mapping special German characters to plain Latin letters
    result = new GermanNormalizationFilter(result);
    // then light stemming / word-form folding
    result = new GermanLightStemFilter(result);
  } else if (matchVersion.onOrAfter(Version.LUCENE_31)) {
    // Lucene 3.1 through 3.6 use a SnowballFilter
    result = new SnowballFilter(result, new German2Stemmer());
  } else {
    // before Lucene 3.1, the legacy GermanStemFilter is used for compatibility
    result = new GermanStemFilter(result);
  }
  return new TokenStreamComponents(source, result);
}
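The TokenStream/TokenFilter design above is a decorator chain: each filter wraps the previous stream and transforms tokens as they flow through. A minimal plain-Java stand-in for that chain (toy stopword set, hypothetical `analyze` helper; the real filters operate on attributes, not strings):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;
import java.util.Set;

public class FilterChainDemo {
    static final Set<String> STOPWORDS = Set.of("die", "und"); // toy stopword set

    // Mirrors the chain: tokenize -> lowercase -> stop filter.
    static List<String> analyze(String text) {
        List<String> out = new ArrayList<>();
        for (String token : text.split("\\s+")) {          // StandardTokenizer stand-in
            String t = token.toLowerCase(Locale.GERMAN);   // LowerCaseFilter stand-in
            if (!STOPWORDS.contains(t))                    // StopFilter stand-in
                out.add(t);
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(analyze("Die Katze und die Hunde")); // [katze, hunde]
    }
}
```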

From the source we can see that in Lucene 4.x the German analyzer stays backward and forward compatible. Here we focus on how versions from 4.x onward do the word-form folding, i.e. on these two filters:
     result = new GermanNormalizationFilter(result);
     result = new GermanLightStemFilter(result);
Let's look at what each class does:

Java code:

package org.apache.lucene.analysis.de;

/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import java.io.IOException;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.util.StemmerUtil;

/**
 * Normalizes German characters according to the heuristics
 * of the <a href="http://snowball.tartarus.org/algorithms/german2/stemmer.html">
 * German2 snowball algorithm</a>.
 * It allows for the fact that ä, ö and ü are sometimes written as ae, oe and ue.
 * <ul>
 *   <li> 'ß' is replaced by 'ss'
 *   <li> 'ä', 'ö', 'ü' are replaced by 'a', 'o', 'u', respectively.
 *   <li> 'ae' and 'oe' are replaced by 'a', and 'o', respectively.
 *   <li> 'ue' is replaced by 'u', when not following a vowel or q.
 * </ul>
 * <p>
 * This is useful if you want this normalization without using
 * the German2 stemmer, or perhaps no stemming at all.
 *
 * As this Javadoc explains, the filter's main job is to map special German
 * characters to their plain-Latin equivalents.
 */
public final class GermanNormalizationFilter extends TokenFilter {
  // FSM with 3 states:
  private static final int N = 0; /* ordinary state */
  private static final int V = 1; /* stops 'u' from entering umlaut state */
  private static final int U = 2; /* umlaut state, allows e-deletion */

  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);

  public GermanNormalizationFilter(TokenStream input) {
    super(input);
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (input.incrementToken()) {
      int state = N;
      char buffer[] = termAtt.buffer();
      int length = termAtt.length();
      for (int i = 0; i < length; i++) {
        final char c = buffer[i];
        switch(c) {
          case 'a':
          case 'o':
            state = U;
            break;
          case 'u':
            state = (state == N) ? U : V;
            break;
          case 'e':
            if (state == U)
              length = StemmerUtil.delete(buffer, i--, length);
            state = V;
            break;
          case 'i':
          case 'q':
          case 'y':
            state = V;
            break;
          case 'ä':
            buffer[i] = 'a';
            state = V;
            break;
          case 'ö':
            buffer[i] = 'o';
            state = V;
            break;
          case 'ü':
            buffer[i] = 'u';
            state = V;
            break;
          case 'ß':
            buffer[i++] = 's';
            buffer = termAtt.resizeBuffer(1+length);
            if (i < length)
              System.arraycopy(buffer, i, buffer, i+1, (length-i));
            buffer[i] = 's';
            length++;
            state = N;
            break;
          default:
            state = N;
        }
      }
      termAtt.setLength(length);
      return true;
    } else {
      return false;
    }
  }
}
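To see the state machine in action outside Lucene, here is a self-contained port of the loop above to a plain static method (the `normalize` name and the string-based wrapper are mine; the per-character logic mirrors `incrementToken`, with the buffer preallocated instead of resized):

```java
import java.util.Arrays;

public class GermanNormalize {
    // FSM states, as in GermanNormalizationFilter
    static final int N = 0; // ordinary state
    static final int V = 1; // stops 'u' from entering umlaut state
    static final int U = 2; // umlaut state, allows e-deletion

    static String normalize(String in) {
        // worst case every char is 'ß' and doubles, so 2x is always enough
        char[] buf = Arrays.copyOf(in.toCharArray(), in.length() * 2);
        int len = in.length();
        int state = N;
        for (int i = 0; i < len; i++) {
            switch (buf[i]) {
                case 'a': case 'o': state = U; break;
                case 'u': state = (state == N) ? U : V; break;
                case 'e':
                    if (state == U) { // drop the 'e' of ae/oe/ue
                        System.arraycopy(buf, i + 1, buf, i, len - i - 1);
                        len--; i--;
                    }
                    state = V; break;
                case 'i': case 'q': case 'y': state = V; break;
                case 'ä': buf[i] = 'a'; state = V; break;
                case 'ö': buf[i] = 'o'; state = V; break;
                case 'ü': buf[i] = 'u'; state = V; break;
                case 'ß': // expand to "ss"
                    buf[i] = 's';
                    System.arraycopy(buf, i + 1, buf, i + 2, len - i - 1);
                    buf[i + 1] = 's';
                    len++; i++;
                    state = N; break;
                default: state = N;
            }
        }
        return new String(buf, 0, len);
    }

    public static void main(String[] args) {
        System.out.println(normalize("straße"));  // strasse
        System.out.println(normalize("haeuser")); // hauser
        System.out.println(normalize("quelle"));  // quelle: 'ue' after 'q' is kept
    }
}
```

Note how the `q` case pushes the FSM into state V, which is exactly what prevents "quelle" from losing its 'e'.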

Java code:

package org.apache.lucene.analysis.de;

/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import java.io.IOException;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.miscellaneous.SetKeywordMarkerFilter;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.KeywordAttribute;

/**
 * A {@link TokenFilter} that applies {@link GermanLightStemmer} to stem German
 * words.
 * <p>
 * To prevent terms from being stemmed use an instance of
 * {@link SetKeywordMarkerFilter} or a custom {@link TokenFilter} that sets
 * the {@link KeywordAttribute} before this {@link TokenStream}.
 *
 * This class applies the stemmer; the interesting work happens in
 * GermanLightStemmer, which we look at next.
 */
public final class GermanLightStemFilter extends TokenFilter {
  private final GermanLightStemmer stemmer = new GermanLightStemmer();
  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
  private final KeywordAttribute keywordAttr = addAttribute(KeywordAttribute.class);

  public GermanLightStemFilter(TokenStream input) {
    super(input);
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (input.incrementToken()) {
      if (!keywordAttr.isKeyword()) { // terms marked as keywords skip stemming
        final int newlen = stemmer.stem(termAtt.buffer(), termAtt.length());
        termAtt.setLength(newlen);
      }
      return true;
    } else {
      return false;
    }
  }
}

Now let's see how GermanLightStemmer extracts the stem. The source is as follows:

Java code:

package org.apache.lucene.analysis.de;

/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

/*
 * This algorithm is updated based on code located at:
 * http://members.unine.ch/jacques.savoy/clef/
 *
 * Full copyright for that code follows:
 */

/*
 * Copyright (c) 2005, Jacques Savoy
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are met:
 *
 * Redistributions of source code must retain the above copyright notice, this
 * list of conditions and the following disclaimer. Redistributions in binary
 * form must reproduce the above copyright notice, this list of conditions and
 * the following disclaimer in the documentation and/or other materials
 * provided with the distribution. Neither the name of the author nor the names
 * of its contributors may be used to endorse or promote products derived from
 * this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
 * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
 * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
 * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
 * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
 * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
 * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
 * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
 * POSSIBILITY OF SUCH DAMAGE.
 */

/**
 * Light Stemmer for German.
 * <p>
 * This stemmer implements the "UniNE" algorithm in:
 * <i>Light Stemming Approaches for the French, Portuguese, German and Hungarian Languages</i>
 * Jacques Savoy
 */
public class GermanLightStemmer {

  // map accented characters to their base vowels
  public int stem(char s[], int len) {
    for (int i = 0; i < len; i++)
      switch(s[i]) {
        case 'ä':
        case 'à':
        case 'á':
        case 'â': s[i] = 'a'; break;
        case 'ö':
        case 'ò':
        case 'ó':
        case 'ô': s[i] = 'o'; break;
        case 'ï':
        case 'ì':
        case 'í':
        case 'î': s[i] = 'i'; break;
        case 'ü':
        case 'ù':
        case 'ú':
        case 'û': s[i] = 'u'; break;
      }

    len = step1(s, len);
    return step2(s, len);
  }


  private boolean stEnding(char ch) {
    switch(ch) {
      case 'b':
      case 'd':
      case 'f':
      case 'g':
      case 'h':
      case 'k':
      case 'l':
      case 'm':
      case 'n':
      case 't': return true;
      default: return false;
    }
  }

  // step 1: strip inflectional endings (-ern, -em/-en/-er/-es, -e, consonant+s)
  private int step1(char s[], int len) {
    if (len > 5 && s[len-3] == 'e' && s[len-2] == 'r' && s[len-1] == 'n')
      return len - 3;

    if (len > 4 && s[len-2] == 'e')
      switch(s[len-1]) {
        case 'm':
        case 'n':
        case 'r':
        case 's': return len - 2;
      }

    if (len > 3 && s[len-1] == 'e')
      return len - 1;

    if (len > 3 && s[len-1] == 's' && stEnding(s[len-2]))
      return len - 1;

    return len;
  }

  // step 2: strip endings such as -est, -er, -en, and consonant+st
  private int step2(char s[], int len) {
    if (len > 5 && s[len-3] == 'e' && s[len-2] == 's' && s[len-1] == 't')
      return len - 3;

    if (len > 4 && s[len-2] == 'e' && (s[len-1] == 'r' || s[len-1] == 'n'))
      return len - 2;

    if (len > 4 && s[len-2] == 's' && s[len-1] == 't' && stEnding(s[len-3]))
      return len - 2;

    return len;
  }
}

The concrete rules boil down to the following:

0. Replace special German characters with their plain-Latin equivalents.
1. Fold accented vowels to a, o, i, u.

step1 (rules checked in order; the first match is applied and returned):
2. Words longer than 5 characters: drop a trailing "ern".
3. Words longer than 4 characters: drop a trailing "em", "en", "er", or "es".
4. Words longer than 3 characters: drop a trailing "e".
5. Words longer than 3 characters ending in "bs", "ds", "fs", "gs", "hs", "ks", "ls", "ms", "ns", or "ts": drop the "s".

step2 (rules checked in order; the first match is applied and returned):
6. Words longer than 5 characters: drop a trailing "est".
7. Words longer than 4 characters: drop a trailing "er" or "en".
8. Words longer than 4 characters ending in "bst", "dst", "fst", "gst", "hst", "kst", "lst", "mst", "nst", or "tst": drop the trailing "st".
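These rules can be exercised directly with a standalone port of GermanLightStemmer's two steps (the method names follow the original; the `stem(String)` wrapper and the chosen test words are mine):

```java
public class GermanLightStem {
    static boolean stEnding(char ch) {
        return "bdfghklmnt".indexOf(ch) >= 0;
    }

    // step 1: -ern, -em/-en/-er/-es, -e, consonant+s
    static int step1(char[] s, int len) {
        if (len > 5 && s[len-3] == 'e' && s[len-2] == 'r' && s[len-1] == 'n')
            return len - 3;
        if (len > 4 && s[len-2] == 'e')
            switch (s[len-1]) { case 'm': case 'n': case 'r': case 's': return len - 2; }
        if (len > 3 && s[len-1] == 'e')
            return len - 1;
        if (len > 3 && s[len-1] == 's' && stEnding(s[len-2]))
            return len - 1;
        return len;
    }

    // step 2: -est, -er/-en, consonant+st
    static int step2(char[] s, int len) {
        if (len > 5 && s[len-3] == 'e' && s[len-2] == 's' && s[len-1] == 't')
            return len - 3;
        if (len > 4 && s[len-2] == 'e' && (s[len-1] == 'r' || s[len-1] == 'n'))
            return len - 2;
        if (len > 4 && s[len-2] == 's' && s[len-1] == 't' && stEnding(s[len-3]))
            return len - 2;
        return len;
    }

    static String stem(String word) {
        char[] s = word.toCharArray();
        int len = s.length;
        for (int i = 0; i < len; i++)      // fold accented vowels first
            switch (s[i]) {
                case 'ä': case 'à': case 'á': case 'â': s[i] = 'a'; break;
                case 'ö': case 'ò': case 'ó': case 'ô': s[i] = 'o'; break;
                case 'ï': case 'ì': case 'í': case 'î': s[i] = 'i'; break;
                case 'ü': case 'ù': case 'ú': case 'û': s[i] = 'u'; break;
            }
        len = step1(s, len);
        return new String(s, 0, step2(s, len));
    }

    public static void main(String[] args) {
        System.out.println(stem("häusern"));  // haus  (fold ä, then drop "ern")
        System.out.println(stem("katzen"));   // katz  (drop "en")
        System.out.println(stem("kleinste")); // klein (step1 drops "e", step2 drops "st")
    }
}
```

The "kleinste" example shows both steps firing on the same word, which is why the algorithm runs step1 and step2 in sequence rather than picking one rule.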

Finally, drawing on other material as well: the rules that strip er, en, e, and s endings handle singular/plural folding, while the remaining rules mainly perform stem extraction for non-noun words.

     

