Normalization & Parsing
Stay up to date with the latest & greatest
- 36 Topics
- 74 Replies
Hi! Just an interesting question. I know that other SIEM vendors have problems with this, but maybe LogPoint has a good function for it. I received a JSON event that did not normalize because no normalization package was enabled. I enabled the package after receiving the event. So my question: is it possible to re-parse this event afterwards so that it gets normalized? Or do I have to wait for another event from the same log source to see if normalization works?
I see that there are no vendor apps for Kubernetes, so normalization packages may still need to be written. But how do you get the logs into Logpoint in the first place, is there a native way to do this? I also found that audit logging is not turned on by default, and when it is, the logs only reside for 1 hour. Anyone with some good advice on the matter? Regards Kai
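There is no Logpoint-specific Kubernetes app mentioned above, but as a hedged sketch of the first half of the problem: Kubernetes audit logging is enabled on the kube-apiserver with an audit policy file and a log path, and the resulting file can then be shipped to Logpoint with any file reader or syslog agent. The paths and retention flags below are illustrative assumptions, not a tested setup:

```yaml
# /etc/kubernetes/audit-policy.yaml -- minimal policy: log request metadata only
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: Metadata

# kube-apiserver flags (illustrative paths; adjust retention to your needs):
#   --audit-policy-file=/etc/kubernetes/audit-policy.yaml
#   --audit-log-path=/var/log/kubernetes/audit.log
#   --audit-log-maxage=30     # keep rotated files for 30 days instead of hours
#   --audit-log-maxsize=100   # rotate at 100 MB
```

With a longer `--audit-log-maxage`, the one-hour retention problem should also go away, since retention is then controlled by the rotation flags rather than the default.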
Hi! I’m curious about how to collect logs from SCCM, specifically logs related to endpoint protection, virus alarms, quarantined threats etc. I found that nxlog provides a configuration file for this, but it is missing some sections, for example <Output out_syslog> to point out the syslog destination: Microsoft System Center Configuration Manager :: NXLog Documentation. Has anyone any experience with this? Thankful for replies.
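For the missing piece, a hedged sketch of what an nxlog output and route block usually look like; the host, port, and the input block name `in_sccm` are placeholders that must match your own configuration, and TCP vs. UDP depends on how the Logpoint collector is set up:

```
# Load the syslog extension so to_syslog_bsd() is available
<Extension syslog>
    Module  xm_syslog
</Extension>

# Hypothetical output: forward over TCP to the Logpoint collector
<Output out_syslog>
    Module  om_tcp
    Host    10.0.0.5        # placeholder: LogPoint collector IP
    Port    514             # placeholder: collector syslog port
    Exec    to_syslog_bsd();
</Output>

# Wire the SCCM input (name assumed) to the syslog output
<Route sccm_to_logpoint>
    Path    in_sccm => out_syslog
</Route>
```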
Hi All,
We are excited to share the release of the new Universal REST API Fetcher. It provides a generic interface to fetch logs from cloud sources via REST APIs. The cloud sources can have multiple endpoints, and every configured source consumes one device license.
For more details, please see the links below:
Help Center: https://servicedesk.logpoint.com/hc/en-us/articles/6047943636253-Universal-REST-API-Fetcher
Documentation: https://docs.logpoint.com/docs/universal-rest-api/en/latest/
Hi! I have been struggling with the normalization of Cisco Firepower logs, where I expect better normalization and enrichment. The syslog is configured from the Firepower Management Center. Everything should be correct in LogPoint, where we have added all the normalization policies for the log source.
Compiled Normalizers:
- Cisco FirepowerNormalizer
- CiscoPIXASACompiledNormalizer
Normalization Packages:
- LP_Cisco Firepower
- LP_Cisco Firepower Management Center
- LP_Cisco Firepower Management Center v6_2
- LP_Cisco PIX/ASA Generic
- LP_Cisco PIXASA
Is there any problem with the syslog format? We had the same issue with a Check Point firewall, but that was solved when we changed the format to CEF. The problem is that Cisco Firepower only supports the syslog format. Does anyone have tips on how to move forward with this?
Hi. We are trying to push multi-line logs to Logpoint, for example a stack trace. They are created by Java applications like JBoss, Tomcat and a few more, where we have debug information in the logs, such as the content of XML messages processed by the system. When such logs are displayed in Logpoint, we need to preserve the line breaks along with the indentation to make them readable by a human. Can you please show a complete recipe for how to achieve that? I saw this topic https://community.logpoint.com/normalization-parsing-43/multi-line-parser-147 and understood that there are some pre-compiled normalizers which can be used. Can you please explain how they work and how exactly we need to 1. send the logs to Logpoint and 2. process the logs in Logpoint, in order to present properly formatted logs (line breaks and indentation) to users who will search for them? Thanks
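I cannot speak for the pre-compiled normalizers, but the usual approach is to merge the lines into one event on the sending side before they reach the SIEM: a new event starts at a line with a leading timestamp, and continuation lines (stack-trace frames, indented XML) are kept with their original line breaks. A minimal Python sketch, assuming a `YYYY-MM-DD HH:MM:SS` timestamp layout and made-up sample lines:

```python
import re

# A new log event is assumed to start with a timestamp such as
# "2022-06-20 07:24:11 ..." -- adjust the pattern to your log layout.
EVENT_START = re.compile(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}")

def merge_multiline(lines):
    """Group raw lines into events, preserving line breaks and indentation."""
    events, current = [], []
    for line in lines:
        if EVENT_START.match(line) and current:
            events.append("\n".join(current))  # flush the previous event
            current = []
        current.append(line.rstrip("\n"))
    if current:
        events.append("\n".join(current))
    return events

raw = [
    "2022-06-20 07:24:11 ERROR Unhandled exception",
    "java.lang.NullPointerException: boom",
    "\tat com.example.Service.run(Service.java:42)",
    "2022-06-20 07:24:12 INFO Recovered",
]
events = merge_multiline(raw)
# events[0] holds the full stack trace; events[1] is the INFO line
```

The merged event then arrives at the collector as a single message, so the line breaks survive as embedded newlines rather than being split into separate events.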
Dear All,
We are happy to share that we have released CSV Enrichment Source v5.2.0 publicly. The CSV Enrichment Source application enables you to use a CSV file as an enrichment source in LogPoint. The application fetches data feeds from a CSV file and enriches search results with the data.
For further information, please visit the link below:
https://servicedesk.logpoint.com/hc/en-us/articles/115003786109
For detailed information about the implementation in Logpoint products, please refer to the articles below:
Logpoint: https://docs.logpoint.com/docs/csvenrichmentsource/en/latest/
Director API: https://docs.logpoint.com/docs/csvenrichmentsource-for-director-console-api/en/latest/
Director Console: https://docs.logpoint.com/docs/csvenrichmentsource-for-director-console-ui/en/latest/
Dear All,
We are excited to announce the public release of Universal Normalizer v5.0.0.
Why is this great news? Universal Normalizer enables you to create your own compiled normalizers for a range of supported log formats in just a few simple steps, with no waiting time. The supported log formats currently include: JSON, CSV, XML, CEF, LEEF and key-value pairs.
To read more about Universal Normalizer v5.0.0, please follow the links below:
Download: https://servicedesk.logpoint.com/hc/en-us/articles/8874831748253
Documentation: https://docs.logpoint.com/docs/universalnormalizer/en/latest/#universal-normalizer
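As an illustration of the key-value pair format listed above (this is a generic sketch, not the product's own parser), a few lines of Python show how such a line decomposes into fields; the sample line is invented:

```python
import shlex

def parse_kv(raw):
    """Split a space-separated key=value log line into a dict.
    Values containing spaces are assumed to be double-quoted."""
    fields = {}
    # shlex keeps quoted values together and strips the quotes
    for token in shlex.split(raw):
        if "=" in token:
            key, _, value = token.partition("=")
            fields[key] = value
    return fields

line = 'date=2022-06-20 time=07:24:11 action=allow msg="connection accepted"'
fields = parse_kv(line)
# fields["action"] == "allow"; fields["msg"] == "connection accepted"
```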
Hi All, The Logpoint Agent Collector v5.2.3 has been released publicly. For more information, please visit the links below. Release notes: https://servicedesk.logpoint.com/hc/en-us/articles/360020035977 Documentation: https://docs.logpoint.com/docs/logpoint-agent/en/latest/
Receiving logs is one of the core features of a SIEM solution, but in some cases logs are not received as required. In our newest KB article, we dive into how to monitor log sources using Logpoint alerts to detect when no logs have been received on Logpoint within a certain time range. To read the full article, please see the link below: https://servicedesk.logpoint.com/hc/en-us/articles/5734141307933-Detecting-devices-that-are-not-sending-logs-
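The article does this with Logpoint alerts; as a language-neutral sketch of the underlying check, the logic is simply "flag every device whose most recent log is older than the allowed gap". Device names and the one-hour threshold below are hypothetical:

```python
from datetime import datetime, timedelta

def silent_devices(last_seen, now, max_gap=timedelta(hours=1)):
    """Return devices whose most recent log is older than max_gap.
    last_seen maps device name -> datetime of its latest log."""
    return sorted(d for d, ts in last_seen.items() if now - ts > max_gap)

now = datetime(2022, 6, 20, 12, 0)
last_seen = {
    "fw-01": datetime(2022, 6, 20, 11, 55),  # healthy: 5 minutes ago
    "sw-07": datetime(2022, 6, 20, 9, 0),    # silent for 3 hours
}
silent_devices(last_seen, now)  # -> ["sw-07"]
```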
The `file_keeper` service, primarily used for storing raw logs and forwarding them to be indexed by the `indexsearcher`, is often run in its default configuration. However, in some real-life situations this may not be sufficient for the type and volume of logs being ingested into LogPoint, so tuning is required. Our newest KB article guides you through exactly how to do it. For more details, please read the full article at the link below: https://servicedesk.logpoint.com/hc/en-us/articles/5794306067101-Understanding-file-keeper-Working-and-Configuration
Hi All,
We're happy to share that we have released the following applications on the Help Center:
- Experimental Median Quartile Quantile Plugin v5.0.0: https://servicedesk.logpoint.com/hc/en-us/articles/115003890489-Experimental-Median-Quartile-Quantile-Plugin
- Vulnerability Management v6.1.1: https://servicedesk.logpoint.com/hc/en-us/articles/360014082658
- Lookup Process Plugin v5.1.0: https://servicedesk.logpoint.com/hc/en-us/articles/115005632749-Lookup-Process-Plugin
- Logpoint Agent Collector v5.2.2: https://servicedesk.logpoint.com/hc/en-us/articles/360020035977
- Universal REST API Fetcher v1.0.0: https://servicedesk.logpoint.com/hc/en-us/articles/6047943636253-Universal-REST-API-v1-0-0
All applications have been bundled in Logpoint v7.1.0 and are available out of the box.
Hi All,
Our latest KB article discusses common issues where logs seem normalized (both norm_id and sig_id are present), but some problems prevent them from being used in analytics. To read the full article, please follow the link below: https://servicedesk.logpoint.com/hc/en-us/articles/5830850414493-Common-Issues-in-Normalized-logs
Hi All,Sometimes we face an issue like an alert not being triggered or a dashboard widget not being populated. There could be many possible reasons. Among them, one is a huge time gap between log_ts and col_ts. In this article, we will be discussing some of the possible causes and sharing tips and tricks to solve this. Please see the link to the article below :) https://servicedesk.logpoint.com/hc/en-us/articles/4791434110877-Resolving-timestamp-related-issues-in-Normalization
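For intuition, the gap discussed above is just the difference between the two timestamps of one event; a small Python sketch with invented example values:

```python
from datetime import datetime

def ingestion_delay(log_ts, col_ts):
    """Seconds between event creation (log_ts) and ingestion (col_ts).
    A large positive value indicates delayed delivery; a negative one
    usually points at clock skew or a mis-parsed timestamp."""
    return (col_ts - log_ts).total_seconds()

delay = ingestion_delay(
    datetime(2022, 4, 6, 10, 0, 0),   # log_ts: when the event occurred
    datetime(2022, 4, 6, 10, 0, 42),  # col_ts: when Logpoint received it
)
# delay == 42.0 seconds
```

An alert evaluating a search window keyed on one timestamp can miss events whose other timestamp falls far outside that window, which is why a large gap breaks alerts and dashboards.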
Hi All,
We are delighted to share our latest KB article addressing the difference between two fields in logs, log_ts (event creation time) and col_ts (event ingestion time in LogPoint), and how they can alter the expected behavior of LogPoint services. You can access the article via the link below: https://servicedesk.logpoint.com/hc/en-us/articles/5348407972893-Delayed-Logs-and-its-uncertainity-with-LogPoint
Hello, since we replaced the Palo Alto firewall devices a couple of days ago (the old ones were running PanOS 9.1.7, the new ones are on 10.1.4) for one of our customers, none of the logs coming from the firewalls are normalized anymore (there are thousands of logs in the repo, but the search query “norm_id="*"” shows no results). We are using the same policies (collection, normalization etc.) as before, and the firewall admin says that they just migrated the configuration from the old to the new devices and cannot see any changes to the log configuration settings. I have already restarted all normalizer services, even rebooted the LP and completely recreated the device configuration. We are using the latest (5.2.0) Palo Alto Application plugin on LogPoint 6.12.2, and its details clearly state that PanOS 10 is supported (Palo Alto Network Firewall – ServiceDesk # LogPoint). And taking a look at the raw logs, I cannot see any difference in the log format between PanOS 9 and 10. However,
We recently added a FortiMail appliance as a log source to one of our Logpoints and now see an issue during collection and normalization. It seems that FortiMail is sending the log messages without separating the individual messages with a newline, NULL termination or anything else, so the syslog_collector reads from the socket until the maximum buffer length is exceeded. As a result we get a maximum-length raw log message (10k characters, which can break in the middle of a log message) containing up to 30 or 40 single log messages written one after the other. The normalizer then normalizes only the first message and discards the rest. Here is a shortened example of how this looks:
550 <6>date=2022-06-20 time=07:24:11.992 device_id=[...]
553 <6>date=2022-06-20 time=07:24:11.992 device_id=[...]
479 <6>date=2022-06-20 time=07:24:11.992 device_id=[...]
324 <6>date=2022-06-20 time=07:24:12.279 device_id=[...]
Is there a way to resolve this issue?
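A guess worth checking: the numeric prefixes (550, 553, 479, 324) look like RFC 6587 octet-counting framing for syslog over TCP rather than missing separators, i.e. each message is prefixed by its own length in bytes. If that is the case, the stream can be split on the length prefixes. A hedged Python sketch with simplified, invented messages (not real FortiMail output):

```python
def split_octet_counted(buf):
    """Split a byte buffer of RFC 6587 octet-counted syslog frames
    ("<len> <msg>") into individual messages."""
    messages = []
    while buf:
        # Each frame starts with the message length in ASCII, then a space
        length, _, rest = buf.partition(b" ")
        n = int(length)
        messages.append(rest[:n])
        buf = rest[n:]
    return messages

# Two 12-byte messages, concatenated with no newline between frames
stream = b"12 <6>msg=hello12 <6>msg=world"
frames = split_octet_counted(stream)
# frames == [b"<6>msg=hello", b"<6>msg=world"]
```

If the collector supports an octet-counting (RFC 6587) framing mode for this source, enabling it should be the clean fix; the sketch only illustrates the framing.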
We recently noticed that some Azure Event Hubs applications (e.g. Azure AD Identity Protection -> https://docs.microsoft.com/en-us/azure/active-directory/identity-protection/overview-identity-protection) set the "time" field not in the ISO 8601 datetime format, but in the "general date long time" format (see https://docs.microsoft.com/en-us/dotnet/standard/base-types/standard-date-and-time-format-strings#GeneralDateLongTime). As a result, the month and day fields seem to be mixed up in these cases; e.g. events that were actually collected on the 6th of April (according to col_ts) are sorted into the repos on the 4th of June (because of the wrong log_ts). Alert rules on these events then trigger months later, when the wrongly sorted events slip into the current window of the search time range. The following screenshot shows how the timestamp format of Azure AD Identity Protection differs from the usual ISO 8601 format. Do you know if it is somehow possible to
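Whether the normalization can be configured this way in Logpoint is exactly the open question, but for illustration of the ambiguity: a parser that tries both formats explicitly avoids the day/month swap, because "4/6/2022" is then read as month/day per the .NET convention rather than guessed. Format strings and sample values below are assumptions:

```python
from datetime import datetime

ISO_8601 = "%Y-%m-%dT%H:%M:%S"        # e.g. 2022-04-06T10:15:00
GENERAL = "%m/%d/%Y %I:%M:%S %p"      # .NET "general date long time", e.g. 4/6/2022 10:15:00 AM

def parse_time(value):
    """Try ISO 8601 first, then the 'general date long time' form."""
    for fmt in (ISO_8601, GENERAL):
        try:
            return datetime.strptime(value, fmt)
        except ValueError:
            pass
    raise ValueError(f"unrecognized timestamp: {value!r}")

a = parse_time("2022-04-06T10:15:00")
b = parse_time("4/6/2022 10:15:00 AM")
# both resolve to April 6th -- a day/month swap would have put b on June 4th
```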
We configured our McAfee ePO (5.10) server to send its logs to a syslog server and configured it in the LP accordingly. Yet, when using the “Test Syslog” feature in McAfee ePO, the test failed. Nonetheless, we are receiving logs from the server, but they only contain gibberish. [screenshot: LP raw logs] As far as I can tell, this is not a normalization problem, as a tcpdump also shows the log payload not being human-readable. [screenshot: tcpdump] I already tried to change the charset in the log collection policy from utf_8 to iso8859_15 and ascii, to no avail. I found the following McAfee document (KB87927), which says: ePO syslog forwarding only supports the TCP protocol, and requires Transport Layer Security (TLS). Specifically, it supports receivers following RFC 5424 and RFC 5425, which is known as syslog-ng. You do not need to import the certificate used by the syslog receiver into ePO. As long as the certificate is valid, ePO accepts it. Self-signed certificates are supported and are commonly used for this