Design & Architecture
Hi Community, we have a distributed collector at a remote location, connected over a site-to-site VPN. The collector's IP address is NATed, so it is mapped to a different IP than the actual host IP: for example, the system IP of the collector is 172.29.20.80, while the remote LogPoint sees it as 172.22.2.2. We have made the necessary configuration and the collector is visible in LogPoint. However, the IP recorded by LogPoint is the actual system IP, not the IP LogPoint should recognize it as, and the collector's status is stuck at Inactive. Is this due to the difference between the host IP and the NAT address?
Hello, I'm designing my backup strategy. So far the documentation describes two options, application snapshot and application backup, and both write to the local disk.

Let's put aside the configuration backup, as it's less than 1 GB; the real challenge is backing up repos.

In an on-prem infrastructure, backups are stored in the backup infrastructure, with VTL and so on. There's no way I can request doubling the size of the repo disk just to store a consistent backup that I would then have to transfer to the backup infrastructure anyway.

In a cloud infrastructure, the backup would go directly to object storage such as S3 Glacier. Nor would we rent disk space used only during backups, though that might be easier to do in a cloud environment.

In addition to the backup and snapshot methods from the documentation, I should add the option of a disk snapshot, either from the guest OS or from the disk array (only for on-prem infrastructure). These would provide a stable file system onto which t
Hi, today I have a Python script for exporting devices into a CSV file with the following fields: device_name, device_ips, device_groups, log_collection_policies, distributed_collector, confidentiality, integrity, availability, timezone. Does a script exist that also extracts the additional fields uses_proxy, proxy_ip and hostname? This would make moving devices from LogPoint 5 to LogPoint 6 considerably easier. Regards, Hans
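A minimal sketch of what such an export could look like. This is not an official LogPoint script: the device list below is a hard-coded stand-in for whatever API or configuration source the real script reads from, and the sample values are invented; only the field names follow the post.

```python
# Hypothetical device export including the extra fields asked for in the post.
import csv
import io

FIELDS = [
    "device_name", "device_ips", "device_groups", "log_collection_policies",
    "distributed_collector", "confidentiality", "integrity", "availability",
    "timezone", "uses_proxy", "proxy_ip", "hostname",
]

# Placeholder data standing in for the real device source.
devices = [
    {"device_name": "fw-01", "device_ips": "10.0.0.1", "device_groups": "firewalls",
     "log_collection_policies": "default", "distributed_collector": "col-01",
     "confidentiality": "3", "integrity": "3", "availability": "3",
     "timezone": "Europe/Copenhagen", "uses_proxy": "yes",
     "proxy_ip": "10.0.0.254", "hostname": "fw-01.example.com"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(devices)
print(buf.getvalue())
```

In a real migration script, `devices` would be populated from the LogPoint 5 side before writing the CSV that LogPoint 6 imports.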
We have a customer who used the new feature for uploading the SSL/TLS certificates for the syslog collector via the web interface. Does this have any effect on the certificates used by OpenVPN? Currently, after configuring the distributed LogPoint, we see in the OpenVPN client log (/opt/immune/var/log/service/remote_con_client_xx.xx.xx.xx/current) that the certificate cannot be verified: 2022-01-04_11:12:48.10967 Tue Jan 4 11:12:48 2022 VERIFY ERROR: depth=0, error=unable to get local issuer certificate: XXX
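For context, that VERIFY ERROR means the peer's certificate was signed by a CA the verifier does not trust. The sketch below reproduces the same error with a throwaway CA and leaf certificate, using the `openssl` CLI via Python. It assumes `openssl` is on the PATH; all file names are local placeholders, not LogPoint's actual certificate paths.

```python
# Reproduce "unable to get local issuer certificate" with a throwaway CA.
import subprocess
import tempfile

tmp = tempfile.mkdtemp()

def run(*args):
    # Run an openssl command inside the scratch directory.
    return subprocess.run(args, cwd=tmp, capture_output=True, text=True)

# 1. Create a self-signed test CA.
run("openssl", "req", "-x509", "-newkey", "rsa:2048", "-nodes",
    "-keyout", "ca.key", "-out", "ca.crt", "-subj", "/CN=Test-CA", "-days", "1")
# 2. Create a leaf key + CSR and sign it with the CA.
run("openssl", "req", "-newkey", "rsa:2048", "-nodes",
    "-keyout", "leaf.key", "-out", "leaf.csr", "-subj", "/CN=leaf")
run("openssl", "x509", "-req", "-in", "leaf.csr", "-CA", "ca.crt",
    "-CAkey", "ca.key", "-CAcreateserial", "-out", "leaf.crt", "-days", "1")

# Without the issuing CA, verification fails the same way OpenVPN reports:
bad = run("openssl", "verify", "leaf.crt")
print(bad.stdout + bad.stderr)  # contains "unable to get local issuer certificate"

# With the issuing CA supplied, the chain verifies:
good = run("openssl", "verify", "-CAfile", "ca.crt", "leaf.crt")
print(good.stdout)
```

So the question for the OpenVPN log above is effectively whether the CA that signed the uploaded syslog-collector certificate is also in the trust store the OpenVPN client verifies against.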
Hello guys, is it only possible to use/create 16 repositories per LogPoint environment? What if I'd like to separate my data into 30 different repositories for management and access-rights purposes: is there a way to do that, and are there benefits or drawbacks to this setup? Thanks in advance. BR, Sascha
Hi all, we are excited to share a new knowledge base article guiding you through the steps on how to use NFS storage as a backup directory. You can access it through the following link: https://servicedesk.logpoint.com/hc/en-us/articles/5068106299805-How-to-use-NFS-storage-as-backup-directory-
Hi folks, we're hoping to add some SSL certificates to our LogPoint installation for the web interface, but we wanted to clarify some information first. We currently have SSL certificates with a root CA and two intermediates: do these need to be combined into a full-chain certificate, or does each part need to be put somewhere else in the LogPoint installation? I saw this post with a reply from Nils referencing the CSR, which is great, but we just want to make sure we're putting the certificates in the right place. If the certificate files do need to go somewhere else, I'm assuming they can't be uploaded via the web interface?
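If a full-chain file turns out to be what's needed, it is conventionally just the PEM blocks concatenated in order: leaf certificate first, then the intermediates from closest-to-the-leaf upward (the root CA itself is usually omitted, since clients already trust it). A minimal sketch, using placeholder PEM contents rather than real certificates, and assumed file names:

```python
# Build a full-chain PEM from a leaf and two intermediates (placeholder data).
from pathlib import Path

def fake_pem(label):
    # Placeholder standing in for a real PEM-encoded certificate.
    return f"-----BEGIN CERTIFICATE-----\n({label})\n-----END CERTIFICATE-----\n"

Path("server.crt").write_text(fake_pem("leaf"))
Path("intermediate1.crt").write_text(fake_pem("intermediate-1"))
Path("intermediate2.crt").write_text(fake_pem("intermediate-2"))

# Order matters: leaf first, then intermediates toward the root.
order = ["server.crt", "intermediate1.crt", "intermediate2.crt"]
Path("fullchain.pem").write_text("".join(Path(p).read_text() for p in order))

print(Path("fullchain.pem").read_text().count("BEGIN CERTIFICATE"))  # → 3
```

Whether LogPoint expects this combined file or separate files is exactly the open question in the post; the sketch only shows the common full-chain layout.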
Hi, on my firewall I opened port 443 to destination customer.logpoint.com (188.8.131.52 and 184.108.40.206). Now I see on the firewall that the server tries to open connections to the IP addresses 220.127.116.11 and 18.104.22.168 on port 443. Are these connections also needed? Best regards, Hans Vedder
We use a large and growing number of self-developed alert rules for our customers, which we manage and develop in an internal Git repository via GitLab. For quality assurance in the continuous-integration process, we still need a way to test the alert rules automatically.

The idea is to check whether each alert rule triggers on the necessary events and behaves as expected in borderline cases: very similar to unit testing in software development, just for alert rules instead of source code.

Our idea so far is as follows:
- Connect an up-to-date LogPoint as a virtual machine, as a QA system, to our Director environment
- Create a snapshot of the "freshly installed" state
- Restore the snapshot via script from the GitLab CI pipeline
- Use the Director API to add a repo, routing policy, normalizer policy and processing policy for the different log types
- Use the Director API to add a device and syslog collector with the corresponding processing policy for each log type
- Use the Director API with our
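The core "unit test for alert rules" idea can be sketched independently of any LogPoint API: model a rule as a predicate over a normalized event and assert it fires on positive fixtures and stays quiet on borderline negatives. The rule format and field names below are invented for illustration, not LogPoint's rule syntax.

```python
# Toy alert-rule tester: a rule is a predicate over a normalized event dict.
def make_rule(**required):
    """Build a rule that fires when every required field matches the event."""
    def rule(event):
        return all(event.get(key) == value for key, value in required.items())
    return rule

# Hypothetical rule: alert on failed admin logins.
failed_admin_login = make_rule(action="login", result="failure", user="admin")

# Positive fixture: the rule must trigger.
assert failed_admin_login({"action": "login", "result": "failure", "user": "admin"})

# Borderline negatives: the rule must NOT trigger.
assert not failed_admin_login({"action": "login", "result": "success", "user": "admin"})
assert not failed_admin_login({"action": "logout", "result": "failure", "user": "admin"})

print("all rule fixtures passed")
```

In the pipeline described above, the fixtures would instead be real log samples injected into the QA LogPoint, with the CI job asserting on the alerts it raises.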
Be aware that if you are going to upgrade to 7.0, there is a bug in the TimeChart function and it will not work. Answer from support: "Hi Kai, we are extremely sorry for the inconvenience caused by it; fixes have been applied in the upcoming patch for 7.0.1." So if you need TimeChart, maybe you should wait until 7.0.1 is out. Regards, Kai
While deploying LogPoint with high-availability repos, I ran a few test scenarios on how HA behaves that I thought would be relevant to share.

Repositories can be configured as high-availability repositories, which means the data is replicated to another instance, so logs remain searchable in a couple of failure scenarios.

First scenario: if the repo fails on the primary datanode (LP DN1), searches are served from HA Repo_1 on the secondary datanode (LP DN2). This could happen, for instance, if the disk is faulty or removed, or the permissions on the path are set incorrectly. This scenario can be seen in the picture below: Repo 1, which is configured with HA, is unavailable on the primary datanode (LP DN1) but still searchable, as the secondary datanode (LP DN2) still has the data in the HA Repo 1 repo. In this scenario Repo 2 and Repo 3 are still searchable.

First HA scenario where one HA repo fails

In the second scenario, if the primary datanode (LP DN1) is shutd
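The failover behavior described above can be captured in a toy model. This is an assumed reading of the scenarios, not LogPoint internals: a repo is searchable from the primary node while its copy there is healthy, and from the secondary node only if the repo is HA-enabled and a replica exists there.

```python
# Toy model of which node can serve searches for a repo (assumed semantics).
def searchable_node(primary_up, ha_enabled, secondary_up):
    """Return the node that can answer searches, or None if the repo is lost."""
    if primary_up:
        return "LP DN1"          # normal case: primary serves the search
    if ha_enabled and secondary_up:
        return "LP DN2"          # failover: HA replica on the secondary answers
    return None                  # no healthy copy anywhere

# Scenario 1: Repo 1's primary copy fails, but its HA replica still answers.
print(searchable_node(primary_up=False, ha_enabled=True, secondary_up=True))

# A non-HA repo with a failed primary copy is simply unsearchable.
print(searchable_node(primary_up=False, ha_enabled=False, secondary_up=True))
```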
We are excited to announce that today we have released LogPoint 7.0. With LogPoint 7, SOAR is a native part of the SIEM, which means you get one out-of-the-box tool for the entire detection, investigation and response process. To learn more about LogPoint 7.0, access the product documentation here: https://docs.logpoint.com/docs/install-and-upgrade-guide/en/latest/ or read our official press release here: https://www.logpoint.com/en/product-releases/streamline-security-operations-with-logpoint-7/ Should you have any specific 7.0 questions, post them in the Community and we will do our best to address them asap :)
Hi, one of the changes from LogPoint 5 to 6 I was excited to see implemented was the support for session keepalive in the syslog collector. Most people don't think much about it, but I would say it is part of ensuring a stable operating environment. Running 'netstat -ano | grep 514' in the CLI, you will probably get something like the output below (I have pasted in the headlines as well, as they will not show when using '| grep'):

Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address        Foreign Address       State        Timer
tcp6  0      0      :::514               :::*                  LISTEN       off (0.00/0/0)
tcp6  0      0      172.20.20.20:514     172.20.10.107:50554   ESTABLISHED  keepalive (7126.58/0/0)
tcp6  0      0      172.20.20.20:514     172.20.10.100:51662   ESTABLISHED  keepalive (7126.58/0/0)
tcp6  0      0      172.20.20.20:514     172.25.20.42:50053    ESTABLISHED  keepalive (7135.92/0/0)
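The "keepalive" state in that Timer column comes from the SO_KEEPALIVE socket option being enabled on the connection. As a minimal illustration (not LogPoint's actual collector code), enabling it on any TCP socket looks like this; the keepalive interval and probe count are then taken from OS defaults unless tuned further:

```python
# Enable TCP keepalive on a socket; the OS then probes idle connections.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# Reading the option back confirms it is set (nonzero once enabled).
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE))
s.close()
```

With keepalive on, dead peers are detected and their connections torn down instead of lingering as stale ESTABLISHED entries, which is what makes it part of a stable operating environment.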
I'm looking to import the various SCCM log files, but there doesn't appear to be a normalizer or an obvious way to do this. Has anyone else done this? I'm mainly interested in the last client communication and the last patching updates and attempts, as part of a bigger piece of work comparing last communications from various different sources.
Hi, I'm just wondering if there are any plans to include some form of case management in the product. We currently comment on incidents in a structured way that allows us to search back through them, but the ability to save multiple logs relating to an investigation, for the purposes of escalation or handover, or even to store them outside the various repos with their own retention period, would be very useful. I know this is currently possible by exporting the logs, but that takes the raw logs out of LogPoint, which is not as useful. Without wanting to point to another vendor, LogRhythm has similar case-management functionality that allows you to add certain logs into a case/investigation for ease.
In our internal research team we observed that it is extremely important to have the logs of client systems collected in your SIEM, especially those of Windows systems, ideally with Sysmon together with a sophisticated Sysmon configuration. The majority of large-scale "attacks" don't use any strange "cyber hacking voodoo", but rely on simple "human naivety" as the initial code-execution trigger, like a mouse click to "enable content" in a Microsoft Office document with VBA macros, delivered via email by the attackers. The subsequent malware download, its execution, reconnaissance and lateral-movement steps can be easily detected with a good Sysmon configuration, and in "real time", before any harm is done or your IDS throws any alerts. The main issue is that clients are typically flexible/mobile systems, which connect to your enterprise network via different IP ranges (several LANs, Wi-Fi, VPN, WAN etc.). As the current LogPoint design requ
Hi, what are the contents of the backup you can take from the LogPoint GUI? I can see there is both a Configuration and a Logs backup, but what's the content of the Configuration backup? Will it include e.g.:
- Users
- Dashboards
- Applications
- Devices
- Norm/enrichment/routing policies