Review the following settings in the Kafka Broker category, and modify as needed:
- log.roll.hours: The maximum time, in hours, before a new log segment is rolled out. The default value is 168 hours (seven days). This setting controls the period of time after which Kafka forces the log to roll, even if the segment file is not full, ensuring that the retention process can delete or compact old data.
- log.retention.hours: The number of hours to keep a log file before deleting it. The default value is 168 hours (seven days). When setting this value, take into account your available disk space and how long you need messages to remain available; an active consumer can read quickly and deliver messages to their destination. The higher the retention setting, the longer the data is preserved, and the more disk space the log files consume, leaving less storage available for other uses.
- log.dirs: A comma-separated list of directories in which log data is kept. If you have multiple disks, list a log directory on each disk.
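As a sketch, these settings live in the broker's server.properties file; the values and paths below are illustrative examples, not recommendations:

```properties
# Roll a new segment after 7 days, even if the current segment is not full
log.roll.hours=168

# Delete (or compact) log data older than 7 days
log.retention.hours=168

# One log directory per physical disk (example paths)
log.dirs=/disk1/kafka-logs,/disk2/kafka-logs
```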
Review the following setting in the Advanced kafka-broker category, and modify as needed:
- log.retention.bytes: The amount of data to retain in the log for each topic partition. By default, log size is unlimited. Note that this is the limit for each partition, so multiply this value by the number of partitions to calculate the total data retained for the topic. If log.retention.hours and log.retention.bytes are both set, Kafka deletes a segment when either limit is exceeded.
- log.segment.bytes: The log for a topic partition is stored as a directory of segment files. This setting controls the maximum size of a segment file before a new segment is rolled over in the log. The default is 1 GB.
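To put the per-partition limit in perspective, here is a small illustrative calculation of worst-case disk usage when log.retention.bytes is set; the partition count, replication factor, and sizes are made-up examples:

```python
def retained_bytes(retention_bytes_per_partition: int,
                   partitions: int,
                   replication_factor: int = 1) -> int:
    """Upper bound on cluster-wide disk usage for one topic.

    log.retention.bytes applies per partition, so the total is the
    per-partition limit times the partition count; each replica
    stores its own copy of the log.
    """
    return retention_bytes_per_partition * partitions * replication_factor

# Example: 12 partitions, 1 GiB retained per partition, replication factor 3
total = retained_bytes(1024 ** 3, partitions=12, replication_factor=3)
print(total // 1024 ** 3)  # 36 (GiB)
```

Remember that retention is enforced at segment granularity, so actual usage can briefly exceed this bound by up to one segment (log.segment.bytes) per partition.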
Log Flush Management
Kafka writes topic messages to a log file immediately upon receipt, but the data is initially buffered in page cache. A log flush forces Kafka to flush topic messages from page cache, writing the messages to disk.
We recommend using the default flush settings, which rely on background flushes done by Linux and Kafka. Default settings provide high throughput and low latency, and they guarantee recovery through the use of replication.
If you decide to specify your own flush settings, you can force a flush after a period of time, or after a specified number of messages, or both (whichever limit is reached first). You can set property values globally and override them on a per-topic basis.
There are several important considerations related to log file flushing:
- Durability: unflushed data is at greater risk of loss in the event of a crash. A failed broker can recover topic partitions from its replicas, but if a follower does not issue a fetch request or catch up to the leader's log end offset within the time specified by replica.lag.time.max.ms (which defaults to 10 seconds), the leader removes the follower from the in-sync replica set (ISR). When this happens there is a slight chance of message loss if you do not explicitly set log.flush.interval.messages: if the leader fails while a lagging follower is still within that 10-second window, and therefore still in the ISR, messages written during the leader-to-follower transition can be lost.
- Increased latency: data is not available to consumers until it is flushed (the fsync implementation in most Linux filesystems blocks writes to the file system).
- Throughput: a flush is typically an expensive operation.
- Disk usage patterns become less efficient.
- Page-level locking in background flushing is much more granular.
- log.flush.interval.messages: Specifies the number of messages to accumulate on a log partition before Kafka forces a flush of data to disk.
- log.flush.scheduler.interval.ms: Specifies the amount of time (in milliseconds) after which Kafka checks whether a log needs to be flushed to disk.
- log.segment.bytes: Specifies the maximum size of a log segment file. Kafka flushes the log file to disk whenever a segment reaches its maximum size.
- log.roll.hours: Specifies the maximum length of time (in hours) before a new log segment is rolled out; this value is secondary to log.roll.ms. Kafka flushes the log file to disk whenever a segment reaches this time limit.
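As an illustration, the global flush settings can be set in server.properties; the values below are examples only, not recommendations:

```properties
# Force a flush after this many messages accumulate on a partition
log.flush.interval.messages=10000

# How often Kafka checks whether any log needs to be flushed
log.flush.scheduler.interval.ms=2000
```

These global values can be overridden per topic with the flush.messages and flush.ms topic-level configurations, for example via kafka-configs.sh with --alter --entity-type topics --entity-name my-topic --add-config flush.messages=10000 (the topic name here is a placeholder).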