comaniac commented on a change in pull request #7835:
URL: https://github.com/apache/tvm/pull/7835#discussion_r612611761
##########
File path: docker/with_the_same_user
##########
@@ -36,8 +36,8 @@ else
rm /this_is_writable_file_system
fi
-getent group "${CI_BUILD_GID}" || addgroup --gid "${CI_BUILD_GID}" "${CI_BUILD_GROUP}"
-getent passwd "${CI_BUILD_UID}" || adduser --gid "${CI_BUILD_GID}" --uid "${CI_BUILD_UID}" \
+getent group "${CI_BUILD_GID}" || addgroup --force-badname --gid "${CI_BUILD_GID}" "${CI_BUILD_GROUP}"
+getent passwd "${CI_BUILD_UID}" || adduser --force-badname --gid "${CI_BUILD_GID}" --uid "${CI_BUILD_UID}" \
Review comment:
Do not change unrelated files. Please file another PR for this change if necessary.
##########
File path: src/relay/op/tensor/reduce.cc
##########
@@ -148,29 +158,51 @@ Array<Array<Layout>> ReduceInferCorrectLayout(const Attrs& attrs,
tvm::Array<tvm::Integer> new_r_axes;
std::string new_layout_string = "";
int axis_index = 0;
- for (auto iter_var : new_in_layouts[0]->axes) {
+ for (auto iter_var : layout->axes) {
const auto& layout_axis = LayoutAxis::Get(iter_var);
const std::string& layout_dim = layout_axis.name();
if (old_r_dims.count(layout_dim)) {
new_r_axes.push_back(tvm::Integer(axis_index));
}
// Collect only the primal axis.
if (layout_axis.IsPrimal()) {
- new_layout_string += layout_dim;
+ if (!old_r_dims.count(layout_dim)) {
+ new_layout_string += layout_dim;
+ }
axis_index++;
}
}
// 3) Set the new axis and layout.
- ret = Layout(new_layout_string);
+    return std::make_tuple(params->keepdims ? stripe(layout) : Layout(new_layout_string),
+                           new_r_axes);
Review comment:
It seems to me that `infer` could be further improved, since the desired layouts of the two branches (keepdims or not) are totally different. Could we make it like the following, so that you don't need to traverse the layout twice when keepdims is true?
```cpp
std::string new_layout_string = "";
for (auto iter_var : layout->axes) {
// ...
if (layout_axis.IsPrimal()) {
if (params->keepdims || !old_r_dims.count(layout_dim)) {
new_layout_string += layout_dim;
}
axis_index++;
}
}
return std::make_tuple(new_layout_string, new_r_axes);
```
If the above solution works, we could further consider merging `infer` and
`stripe` to reduce duplicated logic:
```cpp
auto infer = [&](const Layout& layout, const bool keepdims) { /* ... */ };
// ...
// Origin: std::tie(inferred_out, new_r_axes) = infer(new_in_layouts[0]);
// Origin: inferred_in = stripe(new_in_layouts[0]);
std::tie(inferred_out, new_r_axes) = infer(new_in_layouts[0],
params->keepdims);
std::tie(inferred_in, std::ignore) = infer(new_in_layouts[0], false);
```
##########
File path: src/relay/op/tensor/transform.cc
##########
@@ -2159,6 +2159,74 @@ Array<te::Tensor> SqueezeCompute(const Attrs& attrs, const Array<te::Tensor>& in
return {topi::squeeze(inputs[0], param->axis)};
}
+Array<Array<Layout>> SqueezeInferCorrectLayout(const Attrs& attrs,
+                                               const Array<Layout>& new_in_layouts,
+                                               const Array<Layout>& old_in_layouts,
+                                               const Array<tvm::relay::Type>& old_in_types) {
+ // NOTE: Discard "const" qualifier here.
+ SqueezeAttrs* params = const_cast<SqueezeAttrs*>(attrs.as<SqueezeAttrs>());
+
+  Layout inferred_input = new_in_layouts.defined() ? new_in_layouts[0] : old_in_layouts[0];
+ Layout inferred_output = inferred_input;
+
+ ICHECK(old_in_types[0].as<TensorTypeNode>());
+ const auto& shape = old_in_types[0].as<TensorTypeNode>()->shape;
+
+ // axis to squeeze
+ Array<Integer> axis;
+ if (params->axis.defined()) {
+ axis = params->axis;
+ } else {
+ // if axes is None, squeeze all axes of dimension 1
+ for (size_t i = 0; i < shape.size(); i++) {
+ if (topi::detail::GetConstInt(shape[i]) == 1) {
+ axis.push_back(i);
+ }
+ }
+ }
+
+ if (axis.size() == 0) {
+ return Array<Array<Layout>>{{inferred_input}, {inferred_output}};
+ }
+
+ auto kept = [&](size_t i, Array<Integer> axis) {
Review comment:
- Add comments, or give this function a more descriptive name.
- I'd suggest moving this function down, closer to its call site, to reduce confusion.
##########
File path: src/relay/op/tensor/transform.cc
##########
@@ -2159,6 +2159,74 @@ Array<te::Tensor> SqueezeCompute(const Attrs& attrs, const Array<te::Tensor>& in
return {topi::squeeze(inputs[0], param->axis)};
}
+Array<Array<Layout>> SqueezeInferCorrectLayout(const Attrs& attrs,
+                                               const Array<Layout>& new_in_layouts,
+                                               const Array<Layout>& old_in_layouts,
+                                               const Array<tvm::relay::Type>& old_in_types) {
+ // NOTE: Discard "const" qualifier here.
+ SqueezeAttrs* params = const_cast<SqueezeAttrs*>(attrs.as<SqueezeAttrs>());
+
+  Layout inferred_input = new_in_layouts.defined() ? new_in_layouts[0] : old_in_layouts[0];
+ Layout inferred_output = inferred_input;
+
+ ICHECK(old_in_types[0].as<TensorTypeNode>());
+ const auto& shape = old_in_types[0].as<TensorTypeNode>()->shape;
+
+ // axis to squeeze
+ Array<Integer> axis;
+ if (params->axis.defined()) {
+ axis = params->axis;
+ } else {
+ // if axes is None, squeeze all axes of dimension 1
+ for (size_t i = 0; i < shape.size(); i++) {
+ if (topi::detail::GetConstInt(shape[i]) == 1) {
+ axis.push_back(i);
+ }
+ }
+ }
+
+ if (axis.size() == 0) {
Review comment:
Add a comment noting that there is nothing to squeeze in this case, or simply fold this check into the else branch above to make it clearer.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]