Normalization & Parsing
Stay up to date with the latest & greatest
Hi All,

We are excited to share the release of the new Universal REST API Fetcher. The Universal REST API Fetcher provides a generic interface to fetch logs from cloud sources via REST APIs. Cloud sources can have multiple endpoints, and every configured source consumes one device license.

For more details, please see the links below:
Help Center: https://servicedesk.logpoint.com/hc/en-us/articles/6047943636253-Universal-REST-API-Fetcher
Documentation: https://docs.logpoint.com/docs/universal-rest-api/en/latest/
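To illustrate the general pattern such a fetcher implements, here is a minimal Python sketch of polling a REST endpoint for logs. This is not the product's actual code or configuration; the endpoint, token, and field names are invented for the example:

    import time

    import requests  # third-party HTTP client (pip install requests)

    # Hypothetical endpoint and token -- substitute real values.
    BASE_URL = "https://api.example-cloud.test/v1/logs"
    API_TOKEN = "replace-me"

    def fetch_logs(since: str) -> list:
        """Poll one REST endpoint and return events newer than `since`."""
        resp = requests.get(
            BASE_URL,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            params={"from": since},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json().get("events", [])

    last_seen = "2024-01-01T00:00:00Z"
    while True:
        for event in fetch_logs(last_seen):
            print(event)  # hand the event to the log pipeline here
            last_seen = event.get("timestamp", last_seen)
        time.sleep(60)  # poll interval per endpoint

A source with multiple endpoints would simply run one such loop per endpoint, which is why a single configured source can cover several URLs.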
Dear All,

We are happy to share that we have released CSV Enrichment Source v5.2.0 publicly. The CSV Enrichment Source application enables you to use a CSV file as an enrichment source in Logpoint: it fetches data feeds from a CSV file and enriches search results with that data.

For further information, please visit the link below:
https://servicedesk.logpoint.com/hc/en-us/articles/115003786109

For detailed information about the implementation in Logpoint products, please refer to the articles below:
Logpoint: https://docs.logpoint.com/docs/csvenrichmentsource/en/latest/
Director API: https://docs.logpoint.com/docs/csvenrichmentsource-for-director-console-api/en/latest/
Director Console: https://docs.logpoint.com/docs/csvenrichmentsource-for-director-console-ui/en/latest/
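Conceptually, CSV enrichment is a join between search results and CSV rows on a shared key. A tiny illustrative sketch in Python (the file name and columns are made up, not the application's real format):

    import csv

    # Hypothetical enrichment file with columns "ip", "owner", "department".
    with open("assets.csv", newline="") as f:
        lookup = {row["ip"]: row for row in csv.DictReader(f)}

    # Enrich a (made-up) search result by joining on the shared key field.
    event = {"source_address": "10.0.0.5", "action": "login"}
    match = lookup.get(event["source_address"], {})
    event.update({k: v for k, v in match.items() if k != "ip"})
    print(event)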
Receiving logs is one of the core features of a SIEM solution, but in some cases logs are not received as expected. In our newest KB article, we dive into how to monitor log sources using Logpoint alerts to detect when no logs have been received within a certain time range.

To read the full article, please see the link below: https://servicedesk.logpoint.com/hc/en-us/articles/5734141307933-Detecting-devices-that-are-not-sending-logs-
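To sketch the underlying idea in Python (the article itself does this with a Logpoint alert rule; the device names and threshold below are invented):

    from datetime import datetime, timedelta, timezone

    now = datetime.now(timezone.utc)

    # Hypothetical "last log received" timestamp per device (think col_ts).
    last_log = {
        "fw-01": now - timedelta(minutes=5),
        "fw-02": now - timedelta(hours=3),
    }

    THRESHOLD = timedelta(hours=1)  # alert when a device is silent this long
    silent = [dev for dev, ts in last_log.items() if now - ts > THRESHOLD]
    print("Devices not sending logs:", silent)  # -> ['fw-02']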
The `file_keeper` service, primarily used for storing raw logs and then forwarding them to be indexed by the `indexsearcher`, is often run in its default configuration. However, in some real-life situations the defaults are not sufficient for the type and volume of logs being ingested into Logpoint, and tuning is required. Our newest KB article guides you through exactly how to do it.

For more details, please read the full article at the link below: https://servicedesk.logpoint.com/hc/en-us/articles/5794306067101-Understanding-file-keeper-Working-and-Configuration
Hi All,

We're happy to share that we have released the following applications on the Help Center:
- Experimental Median Quartile Quantile Plugin v5.0.0: https://servicedesk.logpoint.com/hc/en-us/articles/115003890489-Experimental-Median-Quartile-Quantile-Plugin
- Vulnerability Management v6.1.1: https://servicedesk.logpoint.com/hc/en-us/articles/360014082658
- Lookup Process Plugin v5.1.0: https://servicedesk.logpoint.com/hc/en-us/articles/115005632749-Lookup-Process-Plugin
- Logpoint Agent Collector v5.2.2: https://servicedesk.logpoint.com/hc/en-us/articles/360020035977
- Universal REST API Fetcher v1.0.0: https://servicedesk.logpoint.com/hc/en-us/articles/6047943636253-Universal-REST-API-v1-0-0

All applications have been bundled in Logpoint v7.1.0 and are available out of the box.
Hi All,

Our latest KB article discusses common issues where logs seem normalized (both norm_id and sig_id are present), but other problems prevent them from being used in analytics.

To read the full article, please follow the link below: https://servicedesk.logpoint.com/hc/en-us/articles/5830850414493-Common-Issues-in-Normalized-logs
Hi All,

Sometimes we face an issue like an alert not being triggered or a dashboard widget not being populated. There could be many possible reasons; among them is a huge time gap between log_ts and col_ts. In this article, we discuss some of the possible causes and share tips and tricks to solve this. Please see the link to the article below :)

https://servicedesk.logpoint.com/hc/en-us/articles/4791434110877-Resolving-timestamp-related-issues-in-Normalization
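To make the gap concrete, a small Python sketch (the timestamps are invented for illustration):

    from datetime import datetime, timezone

    # log_ts: when the event was created on the source device.
    log_ts = datetime(2022, 6, 20, 7, 0, 0, tzinfo=timezone.utc)
    # col_ts: when Logpoint actually collected the event.
    col_ts = datetime(2022, 6, 20, 9, 30, 0, tzinfo=timezone.utc)

    gap = col_ts - log_ts
    print(f"Delay: {gap}")  # Delay: 2:30:00
    # A gap this large can push an event outside the time window an alert
    # or dashboard widget is searching, so nothing appears to trigger.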
Hi All,

We are delighted to share our latest KB article addressing the difference between two fields in logs, log_ts (event creation time) and col_ts (event ingestion time in Logpoint), and how a gap between them can alter the expected behavior of Logpoint services. You can access the article via the link below:

https://servicedesk.logpoint.com/hc/en-us/articles/5348407972893-Delayed-Logs-and-its-uncertainity-with-LogPoint
Hello, since we replaced the Palo Alto firewall devices a couple of days ago (the old ones were running PAN-OS 9.1.7, the new ones are on 10.1.4) for one of our customers, none of the logs coming from the firewalls are normalized anymore (there are thousands of logs in the repo, but the search query norm_id="*" shows no result). We are using the same policies (collection, normalization, etc.) as before, and the firewall admin says they just migrated the configuration from the old to the new devices and cannot see any changes regarding the log configuration settings. I have already restarted all normalizer services, even rebooted the Logpoint, and completely recreated the device configuration. We are using the latest (5.2.0) Palo Alto application plugin on LogPoint 6.12.2, and its details clearly state that PAN-OS 10 is supported (Palo Alto Network Firewall – ServiceDesk # LogPoint). And looking at the raw logs, I cannot see any difference in the log format between PAN-OS 9 and 10. However,
We recently added a FortiMail appliance as a log source to one of our Logpoints and now see an issue during collection and normalization. It seems that FortiMail sends the log messages without separating the individual messages with a newline, NULL termination, or anything else. Thus the syslog_collector reads from the socket until the maximum buffer length is exceeded. So we get a maximum-length raw log message (10k characters, which then breaks in the middle of a log message) containing up to 30 or 40 individual log messages written one after the other. The normalizer then normalizes only the first message and discards the rest. Here is a shortened example of how this looks:

550 <6>date=2022-06-20 time=07:24:11.992 device_id=[...]
553 <6>date=2022-06-20 time=07:24:11.992 device_id=[...]
479 <6>date=2022-06-20 time=07:24:11.992 device_id=[...]
324 <6>date=2022-06-20 time=07:24:12.279 device_id=[...]

Is there a way to resolve this issue?
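The leading numbers in the sample (550, 553, 479, 324) look like RFC 6587 octet-count framing, where each message is prefixed with its own length in bytes instead of being newline-terminated. If that is what FortiMail is doing, the stream is splittable without newlines; a minimal Python sketch of the idea (not the syslog_collector's actual code):

    def split_octet_counted(buf: bytes) -> list:
        """Split a stream framed as '<length> <message>' into single messages."""
        msgs = []
        while buf:
            space = buf.index(b" ")        # the length prefix contains no space
            length = int(buf[:space])      # e.g. b"550" -> 550 payload bytes
            start = space + 1
            msgs.append(buf[start:start + length])
            buf = buf[start + length:]
        return msgs

    # Two 26-byte messages glued together, mimicking the sample above.
    stream = b"26 <6>date=2022-06-20 msg=one26 <6>date=2022-06-20 msg=two"
    for msg in split_octet_counted(stream):
        print(msg)

Whether the collector can be told to use this framing is a product question, but it explains why the messages arrive glued together rather than corrupted.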
We recently noticed that some Azure Event Hubs applications (e.g. Azure AD Identity Protection -> https://docs.microsoft.com/en-us/azure/active-directory/identity-protection/overview-identity-protection) set the "time" field not in the ISO 8601 datetime format, but in the "general date long time" format (see https://docs.microsoft.com/en-us/dotnet/standard/base-types/standard-date-and-time-format-strings#GeneralDateLongTime). As a result, the month and day fields seem to get mixed up in these cases: for example, events that were actually collected on the 6th of April (according to col_ts) are sorted into the repos under the 4th of June (because of the wrong log_ts). Alert rules on these events then trigger months later, when the wrongly sorted events slip into the current window of the search time range. The following screenshot shows how the timestamp format of Azure AD Identity Protection differs from the usual ISO 8601 format. Do you know if it is somehow possible to
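To make the mix-up concrete in Python (the dates are invented):

    from datetime import datetime

    # ISO 8601 is unambiguous about month vs. day:
    iso = datetime.fromisoformat("2022-04-06T07:24:11+00:00")

    # The .NET "general date long time" ("G") format is culture-dependent.
    # The same string parsed under two different assumptions:
    raw = "4/6/2022 7:24:11 AM"
    as_us = datetime.strptime(raw, "%m/%d/%Y %I:%M:%S %p")  # April 6
    as_eu = datetime.strptime(raw, "%d/%m/%Y %I:%M:%S %p")  # June 4 -- the swap
    print(iso.date(), as_us.date(), as_eu.date())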
We configured our McAfee ePO (5.10) server to send its logs to a syslog server and configured it in the LP accordingly. Yet, when using the "Test Syslog" feature in McAfee ePO, the test failed. Nonetheless, we are receiving logs from the server, but they only contain gibberish.

[Screenshot: LP raw logs]

As far as I can tell this is not a problem with normalization, as a tcpdump also shows the log payload not being human-readable.

[Screenshot: tcpdump]

I already tried changing the charset in the log collection policy from utf_8 to iso8859_15 and ascii, to no avail. I found the following McAfee document (KB87927), which says: ePO syslog forwarding only supports the TCP protocol, and requires Transport Layer Security (TLS). Specifically, it supports receivers following RFC 5424 and RFC 5425, which is known as syslog-ng. You do not need to import the certificate used by the syslog receiver into ePO. As long as the certificate is valid, ePO accepts it. Self-signed certificates are supported and are commonly used for this
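That KB excerpt matches the symptom: TLS handshake bytes and ciphertext arriving on a plaintext listener look like gibberish both in the raw logs and in tcpdump. If you want to verify outside Logpoint that ePO really is sending TLS, a throwaway TLS listener along these lines could help (a sketch only; the certificate paths are placeholders and you would generate a self-signed pair yourself):

    import socket
    import ssl

    # Placeholder cert/key paths -- generate a self-signed pair first.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")

    # 6514 is the conventional port for syslog over TLS (RFC 5425).
    with socket.create_server(("0.0.0.0", 6514)) as srv:
        conn, addr = srv.accept()
        with ctx.wrap_socket(conn, server_side=True) as tls:
            print(tls.recv(4096).decode(errors="replace"))  # readable now

If the payload becomes readable RFC 5424 syslog behind TLS, the fix on the Logpoint side is a TLS-capable collector configuration rather than a charset change.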
Hi. We are trying to push multi-line logs to Logpoint, for example a stack trace. They are created by Java applications like JBoss, Tomcat, and a few more, where we have some debug information in the logs, such as the content of XML messages processed by the system. When such logs are displayed in Logpoint, we need to preserve the line breaks along with the indentation to make them readable by a human. Can you please show a complete recipe for how to achieve that? I saw this topic, https://community.logpoint.com/normalization-parsing-43/multi-line-parser-147, and understood that there are some pre-compiled normalizers that can be used. Can you please explain how they work and what exactly we need to do to: 1. send logs to Logpoint, 2. process logs in Logpoint, in order to present properly formatted (line breaks and indentation) logs to the users who will search for them? Thanks
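One common workaround while waiting for a full recipe (a generic sketch, not a Logpoint-specific API) is to escape the line breaks on the sending side so each event travels as a single syslog line, then undo the escaping wherever the text is shown to users:

    stack_trace = (
        "java.lang.NullPointerException\n"
        "\tat com.example.Foo.bar(Foo.java:42)\n"
        "\tat com.example.Main.main(Main.java:7)"
    )

    # Sending side: fold the event onto one line so syslog framing stays intact.
    wire = stack_trace.replace("\n", "\\n").replace("\t", "\\t")

    # Display side: restore breaks and indentation for human readers.
    # (Round-trips as long as the text contains no literal "\n"/"\t" sequences.)
    restored = wire.replace("\\t", "\t").replace("\\n", "\n")
    assert restored == stack_trace
    print(restored)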
In the default configuration the syslog_collector process only accepts messages (log lines) of up to 10000 bytes (characters). Longer messages are truncated and thus will not be normalized correctly. PowerShell script blocks especially may contain important information but generate very long log messages. Unfortunately this is a fixed value in the syslog_collector binary. At least the C code is available on the system, so you can adjust the value and compile the binary again. For this you need sudo/root access:

sudo -i  # become root
cd /opt/immune/installed/col/apps/collector_c/syslog_collector/
cp syslog_collector.h syslog_collector.h.bak  # create a backup of the file
nano syslog_collector.h  # change the maximum message length value in the header

Then compile the syslog collector and restart the service:

/opt/immune/bin/envdo make clean
/opt/immune/bin/envdo make
sv restart /opt/immune/etc/service/syslog_collector/  # restart the service

It would be a great feature to be able to set this value within the web UI.
It would be great if there were some means to automatically select the respective normalizers automatically. This would reduce the implementation overhead and also help us select the best available normalizers. We could leave a process to analyze the logs and find the normalizers it requires at the start of the implementation and allow it some time to process.What are the limitations/drawback for doing so?