save-buffer commented on a change in pull request #12537:
URL: https://github.com/apache/arrow/pull/12537#discussion_r829327429
##########
File path: cpp/src/arrow/compute/exec/tpch_node.h
##########
@@ -0,0 +1,71 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+#pragma once
+
+#include <string>
+#include <vector>
+#include "arrow/compute/exec/exec_plan.h"
+#include "arrow/compute/exec/options.h"
+#include "arrow/result.h"
+#include "arrow/status.h"
+#include "arrow/type.h"
+#include "arrow/util/pcg_random.h"
+
+namespace arrow {
+namespace compute {
+class OrdersAndLineItemGenerator;
+class PartAndPartSupplierGenerator;
+
+class ARROW_EXPORT TpchGen {
+ public:
+  /*
+   * \brief Create a factory for nodes that generate TPC-H data
+   *
+   * Note: Individual tables will reference each other. It is important that
+   * you create only a single TpchGen instance for each plan; you can then
+   * create nodes for each table from that single TpchGen instance.
+   * Note: Every batch will be scheduled as a new task using the ExecPlan's
+   * scheduler.
+   */
+  static Result<TpchGen> Make(ExecPlan* plan, float scale_factor = 1.0f,
+                              int64_t batch_size = 4096);

Review comment:
Our current batch size of 32k rows is far too large; I hope to address that in the near future. In short, scheduling should happen on segments on the order of 1 million rows, and each segment should be broken up into batches of somewhere between 1024 and 4096 rows (ideally a batch should fit into L1 cache, which is typically 32 KB). With the default at 4096, a batch will at least stay cache-resident for the full execution pipeline, even if scheduling overhead is a bit high. Later, once we introduce a separation between a batch and a unit of scheduling, we may tweak this 4096, but it is in the right ballpark for now. Once that is implemented, scheduling the ExecNode will make 1M / 4096 calls to the batch producer and run the pipeline on each small batch.
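To make the arithmetic above concrete, here is a small self-contained sketch of the segment/batch split being described (a hypothetical illustration only; the constants and names are illustrative, not Arrow APIs):

#include <cstdint>
#include <iostream>

int main() {
  // Illustrative numbers from the comment above: schedule work in
  // ~1M-row segments, then process each segment as small batches
  // that stay resident in L1 cache.
  const int64_t kSegmentRows = 1 << 20;  // one unit of scheduling (~1M rows)
  const int64_t kBatchRows = 4096;       // one pipeline batch
  // Calls to the batch producer per scheduled segment ("1M / 4096"):
  const int64_t calls = (kSegmentRows + kBatchRows - 1) / kBatchRows;
  std::cout << "batch producer calls per segment: " << calls << "\n";  // 256
  return 0;
}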

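For context, a minimal usage sketch of the factory declared in this header, following the doc comment's guidance (the surrounding function, plan pointer, and per-table node creation are assumptions; only Make appears in this excerpt):

#include "arrow/compute/exec/exec_plan.h"
#include "arrow/compute/exec/tpch_node.h"
#include "arrow/result.h"
#include "arrow/status.h"

arrow::Status AddTpchTables(arrow::compute::ExecPlan* plan) {
  // Per the doc comment: create exactly one TpchGen per plan, then derive
  // every table's node from that single instance.
  ARROW_ASSIGN_OR_RAISE(
      auto gen, arrow::compute::TpchGen::Make(plan, /*scale_factor=*/1.0f,
                                              /*batch_size=*/4096));
  // ... create one node per TPC-H table from `gen` here (the per-table
  //     factory methods are not shown in this header excerpt) ...
  return arrow::Status::OK();
}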