kibana - Filebeat / Logstash initial ingestion + continued workload


I'm new to the ELK stack and trying to make things a bit easier. I have a few FuelPHP instances that each keep their own app logs. I'm trying to aggregate those logs into ELK so they can be searched and visualised.

I have set up a Filebeat process on the server in question, and a separate server running Logstash and Elasticsearch. It all seems to be working until Filebeat starts a harvester for each application log file. I have 6 projects with years' worth of logs in the format:

/yyyy/mm/d.log 
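To get a feel for how quickly that layout adds up, here is a rough Python sketch of the same year/month/day structure (Python's `glob` recursion differs slightly from Filebeat's Go-based globbing, so treat the counts as illustrative only):

```python
import glob
import os
import tempfile

# Simulate one project's layout: <logs>/<yyyy>/<mm>/<d>.php
root = tempfile.mkdtemp()
for year in ("2015", "2016", "2017"):
    for month in ("01", "02"):
        os.makedirs(os.path.join(root, year, month))
        for day in ("01", "02", "03"):
            open(os.path.join(root, year, month, day + ".php"), "w").close()

# A recursive glob like the one in filebeat.yml matches every file,
# so years of history means one harvester (one file handle) per file.
matches = glob.glob(os.path.join(root, "**", "*.php"), recursive=True)
print(len(matches))  # 3 years x 2 months x 3 days = 18 files
```

With 6 real projects and several years of daily files, the pattern easily matches thousands of files, each of which Filebeat tries to open.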

I am trying to ingest all of them to start with, then just watch the latest, but I keep hitting "too many open files" issues. Because the registry holds the state, even when files are closed due to inactivity, they get reopened again if I make an adjustment to the config and restart.
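For context, "too many open files" means the process has hit its file-descriptor limit: roughly one descriptor per open harvester, plus network sockets and the registry file. On a Unix-like host you can check the current soft and hard limits from Python (a quick sketch, assuming a Unix platform where the `resource` module is available):

```python
import resource

# RLIMIT_NOFILE is the per-process cap on open file descriptors.
# The soft limit is what the process actually hits; the hard limit
# is the ceiling the soft limit can be raised to without root.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft:", soft, "hard:", hard)
```

If the soft limit is small (1024 is a common default) and the glob matches thousands of files, raising the limit alone only delays the problem; capping how many files Filebeat opens at once is the more durable fix.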

Here is a snippet of my filebeat.yml:

# List of prospectors to fetch data. Each - is a prospector. Options can be
# set at the prospector level, so you can use different prospectors for
# various configurations.
#
# Possible input types are:
# * log: reads every line of the log file (default)
# * stdin: reads from standard in

filebeat.prospectors:

#------------------------------ Log prospector --------------------------------
- input_type: log

  # Paths that should be crawled and fetched. Glob based paths.
  # For each file found under the path, a harvester is started.
  # Make sure a file is not defined twice, as that can lead to
  # unexpected behaviour.
  paths:
    - /srv/**/production/fuel/app/logs/**/**/*.php

  # Exclude lines. A list of regular expressions to match. It drops the
  # lines that match any regular expression from the list. include_lines
  # is called before exclude_lines. By default, no lines are dropped.
  exclude_lines: ['^<\?php ']

  # Set to true to store the additional fields as top level fields instead
  # of under the "fields" sub-dictionary. In case of name conflicts with
  # fields added by Filebeat itself, the custom fields overwrite the
  # default fields.
  fields_under_root: false

  # The regexp pattern that has to be matched.
  multiline.pattern: '^[a-z]+ - '

  # Defines if the pattern set under pattern should be negated or not.
  # Default is false.
  multiline.negate: true

  # Match can be set to "after" or "before". It is used to define if lines
  # should be appended to a pattern that was (not) matched before or after,
  # or as long as a pattern is not matched based on negate.
  # Note: after is the equivalent to previous and before is the equivalent
  # to next in Logstash.
  multiline.match: after

  # Defines if the prospector is enabled.
  enabled: true

output.logstash:
  # Boolean flag to enable or disable the output module.
  enabled: true

  # The Logstash hosts.
  hosts: ["<ip>:5044"]

  # Number of workers per Logstash host.
  worker: 4

  # Set gzip compression level.
  compression_level: 3

Should I be manually ingesting the older logs first, and then setting the ignore_older param for the ongoing entries? Sorry, I'm not sure of the best approach.
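For the ongoing case, a possible prospector config might look something like the sketch below. The values are illustrative, not tested against this setup: `ignore_older` skips files that haven't changed recently, `close_inactive` releases idle file handles quickly, `harvester_limit` hard-caps how many files are open at once, and `clean_inactive` prunes registry state (it must be larger than `ignore_older` plus `scan_frequency`, or state is removed for files still being watched).

```yaml
filebeat.prospectors:
- input_type: log
  paths:
    - /srv/**/production/fuel/app/logs/**/**/*.php
  # Skip files not modified in the last 72 hours; the historical
  # years of logs would be ingested once in a separate one-off run.
  ignore_older: 72h
  # Close a file handle after 5 minutes with no new lines, well
  # before ignore_older, so descriptors are released quickly.
  close_inactive: 5m
  # Cap concurrent harvesters so the process stays under the
  # open-file limit even when many files are active at once.
  harvester_limit: 50
  # Remove registry state for files untouched for 30 days, so the
  # registry does not grow without bound (must exceed ignore_older).
  clean_inactive: 720h
```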

Many thanks

