es-ik - Using the IK Chinese analyzer with Elasticsearch
MIT
Cross-platform
Java
Overview
Integration of the IK Chinese analyzer with Elasticsearch. The original IK analyzer reads its dictionary from the file system; es-ik itself can be extended to read dictionaries from other sources. Currently, reading from a SQLite3 database is supported. To use es-ik-plugin-sqlite3:
1. Set the path to your SQLite3 dictionary in elasticsearch.yml:
ik_analysis_db_path: /opt/ik/dictionary.db
A default dictionary is available at: https://github.com/zacker330/es-ik-sqlite3-dictionary
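If you build your own dictionary database, you can sanity-check the file with the sqlite3 command-line tool before pointing Elasticsearch at it. This is a sketch, assuming the sqlite3 CLI is installed and the dictionary sits at the path configured above; `.tables` simply lists whatever tables the file contains without assuming any particular schema:

```shell
# List the tables in the dictionary database (path taken from elasticsearch.yml above)
sqlite3 /opt/ik/dictionary.db '.tables'
```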
2. Install the plugin (currently version 1.0.1):
./bin/plugin -i ik-analysis -u https://github.com/zacker330/es-ik-plugin-sqlite3-release/raw/master/es-ik-sqlite3-1.0.1.zip
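After installation you can confirm the plugin was picked up. On Elasticsearch 1.x the same plugin script can list installed plugins (a sketch, assuming the 1.x plugin manager's `-l`/`--list` flag):

```shell
# List installed plugins; ik-analysis should appear in the output
./bin/plugin -l
```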
3. Now you can test it:
1. Create an index:
curl -X PUT -H "Cache-Control: no-cache" -d '{
  "settings": {
    "index": {
      "number_of_shards": 1,
      "number_of_replicas": 1
    }
  }
}' 'http://localhost:9200/songs/'
2. Create the mapping:
curl -X PUT -H "Cache-Control: no-cache" -d '{
  "song": {
    "_source": {"enabled": true},
    "_all": {
      "indexAnalyzer": "ik_analysis",
      "searchAnalyzer": "ik_analysis",
      "term_vector": "no",
      "store": "true"
    },
    "properties": {
      "title": {
        "type": "string",
        "store": "yes",
        "indexAnalyzer": "ik_analysis",
        "searchAnalyzer": "ik_analysis",
        "include_in_all": "true"
      }
    }
  }
}' 'http://localhost:9200/songs/_mapping/song'
3. Analyze some text:
curl -X POST -d '林夕为我们作词' 'http://localhost:9200/songs/_analyze?analyzer=ik_analysis'
Response:
{"tokens":[{"token":"林夕","start_offset":0,"end_offset":2,"type":"CN_WORD","position":1},{"token":"作词","start_offset":5,"end_offset":7,"type":"CN_WORD","position":2}]}
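Beyond `_analyze`, a quick end-to-end check is to index a document into the analyzed `title` field and then search for one of the IK tokens. This is a sketch, assuming Elasticsearch is running on localhost:9200 and the index and mapping above were created; the document id `1` and the query term are made up for illustration:

```shell
# Index a sample song (hypothetical document id 1)
curl -X PUT -d '{"title": "林夕为我们作词"}' 'http://localhost:9200/songs/song/1'

# Refresh so the document is searchable immediately
curl -X POST 'http://localhost:9200/songs/_refresh'

# Search for one of the IK tokens ("林夕"); the document above should be a hit
curl -X POST -d '{"query": {"match": {"title": "林夕"}}}' 'http://localhost:9200/songs/_search'
```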