一尘不染

How to parse JSON from a text file line in logstash/grok?

elasticsearch

I have a log file that looks like this (simplified):

Sample log line

MyLine data={"firstname":"bob","lastname":"the builder"}

I want to extract the JSON contained in data and create two fields, one for the first name and one for the last name. However, the output I get looks like this:

{"message":"Line data={\"firstname\":\"bob\",\"lastname\":\"the builder\"}\r","@version":"1","@timestamp":"2015-11-26T11:38:56.700Z","host":"xxx","path":"C:/logstashold/bin/input.txt","MyWord":"Line","parsedJson":{"firstname":"bob","lastname":"the builder"}}

As you can see:

..."parsedJson":{"firstname":"bob","lastname":"the builder"}}

That isn't what I need. I need fields for firstname and lastname to show up in Kibana, but Logstash isn't extracting them as fields with the json filter.

Logstash configuration

input {
  file {
    path => "C:/logstashold/bin/input.txt"
  }
}

filter {
  grok {
    match => { "message" => "%{WORD:MyWord} data=%{GREEDYDATA:request}" }
  }

  json {
    source       => "request"
    target       => "parsedJson"
    remove_field => ["request"]
  }
}

output {
  file {
    path => "C:/logstashold/bin/output.txt"
  }
}

Any help is greatly appreciated; I'm sure I'm missing something simple.

Thanks



1 Answer

一尘不染

After the json filter, add another filter, mutate, to add two fields copied out of the parsedJson field:

filter {
  ...
  json {
     ...
  }
  mutate {
    add_field => {
      "firstname" => "%{[parsedJson][firstname]}"
      "lastname" => "%{[parsedJson][lastname]}"
    }
  }
}

For the sample log line above, this yields:

{
       "message" => "MyLine data={\"firstname\":\"bob\",\"lastname\":\"the builder\"}",
      "@version" => "1",
    "@timestamp" => "2015-11-26T11:54:52.556Z",
          "host" => "iMac.local",
        "MyWord" => "MyLine",
    "parsedJson" => {
        "firstname" => "bob",
         "lastname" => "the builder"
    },
     "firstname" => "bob",
      "lastname" => "the builder"
}
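
The event above still carries the intermediate parsedJson object alongside the copied fields. If that duplication is unwanted, a second mutate can drop it once the top-level fields exist. This is a small optional addition, not part of the original answer:

filter {
  ...
  mutate {
    add_field => {
      "firstname" => "%{[parsedJson][firstname]}"
      "lastname"  => "%{[parsedJson][lastname]}"
    }
  }
  # a second mutate runs after the first, so the copied fields already exist
  mutate {
    remove_field => ["parsedJson"]
  }
}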
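
Alternatively, if the intermediate object isn't needed at all: when the json filter's target option is left out, the decoded keys are written to the root of the event, so firstname and lastname become top-level fields without any mutate step. A minimal sketch of that variant, using the same grok pattern as in the question:

filter {
  grok {
    match => { "message" => "%{WORD:MyWord} data=%{GREEDYDATA:request}" }
  }
  # no target: the parsed keys land at the event root
  json {
    source       => "request"
    remove_field => ["request"]
  }
}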