#general


@octchristmas: I'm looking for best practices for choosing hardware (CPU, memory, storage). This is the only documentation I found. Of course I know it depends on my workload, but can someone give me a general hint or share their experience? Anything helps.
  @mayanks: Like you said, it depends a lot on the workload. Instead of giving a very generic answer that might mislead you, I’d say set up a single broker + server cluster, see how much you can squeeze out of it, and then simply scale up/out.
  @octchristmas: Thanks Mayank, I agree that’s the best approach. Still, if there were documented best practices for selecting hardware per workload type, I could use them to pick hardware for mine. Do such best practices not exist, or am I just not finding them?
  @mayanks: Servers do most of the work, so you want higher CPU/memory/storage on the server nodes. If you load 1 TB of data on each node, make sure you have SSDs (e.g. EBS), and perhaps at least 16 cores / 64 GB of memory. Don’t allocate too much heap space: with 64 GB of RAM we set a 16 GB heap (Xms = Xmx).
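  (As a rough illustration of that heap sizing, assuming you launch the server with bin/pinot-admin.sh and that your deployment honors the JAVA_OPTS environment variable for JVM flag overrides; the ZooKeeper address and config file path below are placeholders:)
    # hypothetical: fixed 16 GB heap for a server node with 64 GB of RAM
    export JAVA_OPTS="-Xms16G -Xmx16G -XX:+UseG1GC"
    bin/pinot-admin.sh StartServer -zkAddress localhost:2181 -configFileName conf/pinot-server.conf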
  @mayanks: Brokers/controllers don’t need locally attached storage. Always use a deep store (or NAS/NFS if you’re not on a cloud).
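  (For reference, a minimal sketch of pointing the controller at an S3 deep store rather than local disk, following Pinot’s documented S3 deep-storage settings; the bucket, region, and temp directory are placeholders:)
    # controller config (conf/pinot-controller.conf)
    controller.data.dir=s3://my-pinot-deepstore/controller-data
    controller.local.temp.dir=/tmp/pinot-controller-tmp
    pinot.controller.storage.factory.class.s3=org.apache.pinot.plugin.filesystem.S3PinotFS
    pinot.controller.storage.factory.s3.region=us-east-1
    pinot.controller.segment.fetcher.protocols=file,http,s3
    pinot.controller.segment.fetcher.s3.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher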
  @mayanks: cc @mark.needham
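  (If you want to try the single broker + server benchmark mayanks suggests above, a rough sketch using the stock pinot-admin.sh commands with default ports and a local ZooKeeper; adjust addresses for your environment:)
    bin/pinot-admin.sh StartZookeeper -zkPort 2181
    bin/pinot-admin.sh StartController -zkAddress localhost:2181 -controllerPort 9000
    bin/pinot-admin.sh StartBroker -zkAddress localhost:2181
    bin/pinot-admin.sh StartServer -zkAddress localhost:2181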
