How are PUs assigned/distributed in the Azure Event Hubs premium tier?

ARAVETI, MAHESH 0 Reputation points
2024-04-05T22:04:29.98+00:00

Hi

We have 15+ event hubs in a namespace, each with a varying number of partitions. The problem we are seeing is that we are able to pull only about 1 TB per hour, even though the Event Hubs limits are higher than this. We are consuming with the Logstash Kafka input.

Any insight on tuning this for a better performance is appreciated.

Thanks

Mahesh

Azure Event Hubs
An Azure real-time data ingestion service.

1 answer

  1. PRADEEPCHEEKATLA-MSFT 80,491 Reputation points Microsoft Employee
    2024-04-08T03:20:02.8+00:00

    @ARAVETI, MAHESH - Thanks for the question and using MS Q&A platform.

    To answer your first question, in the Azure Event Hubs Premium tier, processing units (PUs) are assigned to a namespace and are shared across all event hubs in that namespace. The number of PUs assigned to a namespace determines the maximum throughput capacity of the namespace. You can increase the number of PUs assigned to a namespace to increase the maximum throughput capacity.
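    To get a feel for whether the namespace's PU allocation could itself be the bottleneck, a quick back-of-the-envelope calculation helps. The per-PU egress figure below is an illustrative assumption (actual Premium-tier throughput per PU depends on partition count, payload size, and consumer behaviour), so benchmark your own workload rather than relying on it:

    ```python
    import math

    def pus_needed(tb_per_hour: float, egress_mb_per_sec_per_pu: float) -> int:
        """Rough number of PUs needed to sustain a target egress rate.

        The per-PU egress figure is a planning assumption, not a documented
        guarantee -- measure your own workload to get a real number.
        """
        mb_per_sec = tb_per_hour * 1_000_000 / 3600  # TB/hour -> MB/s (decimal units)
        return math.ceil(mb_per_sec / egress_mb_per_sec_per_pu)

    # 1 TB/hour is roughly 278 MB/s sustained egress.
    print(pus_needed(1, 10))  # assuming ~10 MB/s egress per PU -> 28
    print(pus_needed(1, 20))  # assuming ~20 MB/s egress per PU -> 14
    ```

    If the required PU count under a realistic per-PU figure is close to or above what the namespace is allocated, scaling up PUs (or splitting event hubs across namespaces) is the first lever to pull.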

    Regarding your second question, there are a few things you can do to improve the performance of your Logstash Kafka consumer:

    • Increase the number of partitions for your event hubs: The number of partitions determines the maximum number of concurrent readers that can read from an event hub. By increasing the number of partitions, you can increase the maximum number of concurrent readers and improve the overall throughput of your event hub.
    • Increase the number of PUs assigned to your namespace: As I mentioned earlier, the number of PUs assigned to a namespace determines the maximum throughput capacity of the namespace. By increasing the number of PUs, you can increase the maximum throughput capacity and improve the overall performance of your event hub.
    • Optimize your Logstash Kafka consumer configuration: Make sure that your Logstash Kafka consumer is configured to use the optimal settings for your use case. This includes settings such as the batch size, the maximum number of messages to fetch per poll, and the maximum number of concurrent fetches.
    • Monitor your event hub and Logstash Kafka consumer: Use Azure Monitor to monitor the performance of your event hub and Logstash Kafka consumer. This will help you identify any bottlenecks or performance issues and take corrective action.
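    As a starting point for the consumer-side tuning above, here is a minimal sketch of a Logstash `kafka` input pointed at the Event Hubs Kafka endpoint. The namespace, event hub name, thread count, and fetch values are placeholders/assumptions to tune for your workload, not recommendations:

    ```
    input {
      kafka {
        # Event Hubs Kafka endpoint; NAMESPACE is a placeholder.
        bootstrap_servers => "NAMESPACE.servicebus.windows.net:9093"
        topics            => ["my-event-hub"]    # event hub name (placeholder)
        group_id          => "logstash-consumers"

        # One consumer thread per partition (up to the partition count)
        # lets Logstash read all partitions in parallel.
        consumer_threads  => 8                   # match to your partition count

        # Larger fetches trade a little latency for throughput;
        # these values are illustrative starting points.
        max_poll_records  => "5000"
        fetch_min_bytes   => "1048576"           # 1 MiB
        fetch_max_wait_ms => "500"

        # Event Hubs' Kafka endpoint uses SASL_SSL with the
        # connection string as the password.
        security_protocol => "SASL_SSL"
        sasl_mechanism    => "PLAIN"
        sasl_jaas_config  => 'org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="<connection string>";'
      }
    }
    ```

    If total reader count across all Logstash instances exceeds the partition count, the extra threads sit idle, so scale partitions and `consumer_threads` together.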

    Hope this helps. Do let us know if you have any further queries.


    If this answers your query, do click Accept Answer and Yes for "Was this answer helpful". And if you have any further queries, do let us know.