Logpoint normalises during ingestion, so once an event has been ingested without being normalised, it will stay that way. There are in-line process commands you can use during a search (such as norm, norm on, regex, etc.) to process logs “on the fly” after the fact if need be, but that is not the same as reapplying the normaliser.
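For example, something along these lines (written from memory, so please double-check the exact syntax in the documentation before relying on it) would pull a field out of the raw message at search time only:

| norm on msg sender_ip=<sender_ip:ip>

The stored event itself stays exactly as it was ingested - only the results of that particular search are affected.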
One good approach could be to use the Universal Normaliser - it can process JSON events “out of the box”, but it can then further process, rename, etc. the field names that the JSON provides. There is some GUI functionality to copy/paste an example message to see how it gets processed (just like we have for the traditional regex-based normalisers, which are pretty useless for structured formats like JSON), and that might get you closer to a working normalisation before the next message arrives - but that is ultimately the test.
For something super complicated we have our internal “Logfaker” plugin, which can be used to “inject” messages into a device from a simple text file with example data - in which case you wouldn’t need to wait for that exact event to occur again before testing the new normalisation. Support could probably make that available if need be, but hopefully that won’t be necessary.
Hi again
Thanks for the fast reply. I see.
I’ve heard about the Universal Normalizer before. For the moment I’m only using the compiled normalizers and Normalization Packages that are built in.
For these JSON logs I only use the JSONCompiledNormalizer. Would you recommend using the Universal Normalizer instead? Then I’d have to download that .pak file and so on.
Thanks
The Universal Normaliser should ship with 7.2.1 onwards, and that is the minimum required version. If you are working with JSON data then it can be a useful tool, specifically in cases where you’re not happy with the JSON data as it comes in - for example, JSON encodes the field names, and the regular JSON normaliser just takes them as-is. So if there is a JSON field called sender_ip, then that’s what the field name will be in Logpoint. Of course you might want that called source_address to keep with the Logpoint taxonomy, and the Universal Normaliser can do just that - process the JSON and then transform things as required. It can also further parse a field - for example, if you get an access_path field, you could split that into a drive and a directory field, etc.
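To make that concrete, here is a made-up example (the field names, values and resulting fields are purely for illustration, not an exact configuration):

Raw event:  {"sender_ip": "192.0.2.10", "access_path": "D:\\logs\\webserver"}
Regular JSON normaliser (fields taken as-is):  sender_ip=192.0.2.10, access_path=D:\logs\webserver
Universal Normaliser (with a rename and a split configured):  source_address=192.0.2.10, drive=D:, directory=\logs\webserver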
Of course that all depends on the JSON data to begin with - it still needs to be valid JSON to work, and if you’re not particularly concerned with the structure of the data then the regular JSON normaliser should work too. You should theoretically never have a “JSON event that did not normalise” once you have a normalisation policy with either the regular JSON normaliser or the Universal Normaliser. If it’s valid JSON, they should both normalise it. The difference is just in how, and into what.
All right! I’m not running that version yet, but I will look into the Universal Normalizer - that would be good. I’m going to work with some JSON logs this autumn.
I was actually impressed and got a good result with the JSONCompiledNormalizer - all the relevant data was normalised in a good way.
The JSON logs from this particular source are quite simple and only contain a few attributes.
What are your thoughts on Normalization Packages? What’s best practice? Is it fine to combine a log source that only holds JSON logs into a Windows Normalization Package that holds all the different Windows-related compiled normalizers and Normalization Packages?
Or should I create a separate Normalization Package that only holds the JSONCompiledNormalizer for the log source that sends JSON events?
Thanks
If you’re working with JSON logs, all you can do is change the order of the compiled normalisers in the normalisation policy. Custom normalisation PACKAGES can only be created as non-compiled normalisers, i.e. regex-based. You might mean combining them into a normalisation POLICY rather than a PACKAGE - that’s fine, just make sure you put the normalisers that will be used most often at the top. That way, in 90% of cases Logpoint will never have to try the others, and will only fall back to them for whatever slips through.
But if your device will only ever send your custom logs, then it’s probably best to have a separate normalisation policy just for that device.
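For example (the Windows normaliser names below are just placeholders, not exact product names), a combined policy might be ordered like this:

1. WindowsCompiledNormalizer       <- handles the bulk of the traffic
2. WinEventLogCompiledNormalizer
3. JSONCompiledNormalizer          <- only the JSON source falls through to this

whereas a dedicated policy for the JSON device would contain just the JSONCompiledNormalizer, so nothing else is ever tried.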
Thanks Nils for your answers.