
Tutorial: Troubleshoot Log Streaming

Abstract

Here are some frequently asked questions that might help you troubleshoot issues with Log Streaming.

1. What can I do if I get an error when I click the Test Message button?

Verify the configuration of your AWS S3 bucket.

Carry out the following checks (a scripted sketch of them follows the list):

  1. Check that the AWS S3 bucket name entered in the Log Streaming configuration matches the name shown on the AWS console.

  2. Check that the AWS S3 bucket Region entered in the Log Streaming configuration matches the Region shown on the AWS console.

  3. Check that access policies on your AWS S3 bucket correspond to the ones in the AWS S3 bucket configuration tutorial.

  4. Check that you can create a new file on the AWS S3 bucket.
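Checks 1, 2, and 4 can also be run as a script. Below is a minimal sketch using boto3; the bucket name and Region are placeholders that you would replace with the values from your Log Streaming configuration:

```python
import boto3
from botocore.exceptions import ClientError

BUCKET = "my-log-streaming-bucket"  # placeholder: name from your Log Streaming configuration
REGION = "us-east-1"                # placeholder: Region from your Log Streaming configuration

s3 = boto3.client("s3", region_name=REGION)

# Checks 1 and 2: the bucket exists and is reachable under this name and Region.
try:
    s3.head_bucket(Bucket=BUCKET)
    location = s3.get_bucket_location(Bucket=BUCKET)
    # get_bucket_location reports None for buckets in us-east-1.
    print("Bucket Region:", location["LocationConstraint"] or "us-east-1")
except ClientError as err:
    print("Bucket check failed:", err)

# Check 3 (bucket access policies) is easiest to verify manually in the AWS console.

# Check 4: a new file can be created in the bucket.
try:
    s3.put_object(Bucket=BUCKET, Key="log-streaming-write-test.txt", Body=b"test")
    print("Write test succeeded")
except ClientError as err:
    print("Write test failed:", err)
```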

If all the above checks pass but the issue persists, create a support ticket that includes the details and screenshots of your AWS S3 configuration and bucket policies.

2. Why can’t I Resume streaming when the stream is in a Pausing state?

The stream can remain in the Pausing state for quite some time. Wait for the stream to switch to the Error state, then resume it.

While the stream is in the Pausing or Error state, both already-sent and future messages are preserved.

3. Why can’t I Pause the stream when it is in the Provisioning state?

The log stream is being set up during the Provisioning state and therefore cannot be paused. The Pausing operation can be performed once Provisioning is complete; wait for the stream to switch to the Active or Error state.

4. Why have no files been created in the AWS S3 bucket for a long time?

Files are created in the AWS S3 bucket only when there are events to write. Events are written to the AWS S3 bucket with a delay of at most 60 seconds.

You can use the Test Message button to check the AWS S3 configuration: a test message is written to the S3 bucket immediately, without any delay. Also, check the stream toggles, which define what events are written to the AWS S3 bucket, and make sure at least one of them is ON.
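To confirm that files are arriving, you can poll the bucket and compare the newest object before and after the 60-second flush window. A minimal sketch using boto3, assuming a hypothetical bucket name and key prefix:

```python
import time
import boto3

BUCKET = "my-log-streaming-bucket"  # hypothetical name
PREFIX = ""                         # adjust if your stream writes under a key prefix

s3 = boto3.client("s3")

def newest_object_key(bucket: str, prefix: str):
    """Return the key of the most recently modified object, or None if the bucket is empty."""
    resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
    objects = resp.get("Contents", [])
    if not objects:
        return None
    return max(objects, key=lambda o: o["LastModified"])["Key"]

# Events are flushed within at most 60 seconds, so two polls a minute apart
# should reveal whether new files are being created.
before = newest_object_key(BUCKET, PREFIX)
time.sleep(60)
after = newest_object_key(BUCKET, PREFIX)
print("New file created" if after != before else "No new files yet")
```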

5. What are the strange IDs in the initiator and parentEntity fields?

The IDs in the initiator and the parentEntity fields are system-wide IDs of Users, Hosts, Networks, Devices, and Connectors.

You can use the CloudConnexa API to get the details of an entity from its ID, for example, the username for a User entity or the device name for a Device entity. Use the initiatorType and parentEntityType fields to select the proper API call for data retrieval. You can also use the CloudConnexa API and these IDs to perform actions on an entity, such as revoking a certificate.
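As an illustration, here is a minimal Python sketch of such a lookup. The base URL, token handling, and resource paths below are assumptions for illustration, not the documented CloudConnexa API; consult the API reference for the actual endpoints:

```python
import requests

API_BASE = "https://example.api.openvpn.com/api/v1"  # hypothetical base URL
TOKEN = "YOUR_BEARER_TOKEN"                          # hypothetical OAuth bearer token

# Hypothetical mapping from initiatorType/parentEntityType values to resource paths.
RESOURCE_PATHS = {
    "USER": "users",
    "DEVICE": "devices",
    "HOST": "hosts",
    "NETWORK": "networks",
    "CONNECTOR": "connectors",
}

def lookup_entity(entity_type: str, entity_id: str) -> dict:
    """Fetch entity details (e.g., a username or device name) by its system-wide ID."""
    resp = requests.get(
        f"{API_BASE}/{RESOURCE_PATHS[entity_type]}/{entity_id}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# Example: resolve a parentEntity of type DEVICE to its device name.
# details = lookup_entity("DEVICE", "<parentEntity ID from the log record>")
# print(details.get("name"))
```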

6. How do I remove the configuration for the AWS S3 bucket?

To remove your configuration, go to the editing view and click the Delete Configuration button. Refer to Delete Log Streaming Configuration.

7. Why is the stream status not changing for a long time?

Use the Refresh button on the Log Streaming page to retrieve the current stream status.

8. How can I monitor the status changes of Log Streaming?

By default, Log Streaming status changes are emailed to the Owner and all Administrators of the CloudConnexa account. They can change the email preferences on the Log Streaming page and in the Notifications section. Refer to Alert Notifications for Log Streaming.

9. How can I unsubscribe from Log Streaming status change emails?

Any Administrator can change Log Streaming email preferences on the Log Streaming page and in the Notifications section. Refer to Alert Notifications for Log Streaming.

10. What is the difference between Paused and Error states?

In the Paused state, Log Streaming stops collecting logs from CloudConnexa. When Log Streaming is enabled again, only new messages generated after unpausing are sent to the AWS S3 bucket.

In the Error state, Log Streaming continues gathering and caching events for 24 hours. This means that after recovering from the Error state, the messages generated during the last 24 hours are delivered to the AWS S3 bucket.