#3030 Add flatten Command for Objects to PPL #3267

Open: wants to merge 54 commits into base: main (changes shown from all commits)

Commits (54)
f462e2a
Add flatten command to ANTLR lexer and parser.
currantw Jan 17, 2025
69f0b1a
Skeleton implementation, tests, and documents with lots of TODOs.
currantw Jan 20, 2025
c1ac737
Initial implementation
currantw Jan 20, 2025
366e162
Fix typo
currantw Jan 24, 2025
0cbd8d4
Initial implementation
currantw Jan 27, 2025
26e9443
Update/fix tests.
currantw Jan 27, 2025
237b69e
Update integration tests to align with doc tests.
currantw Jan 31, 2025
3981c38
Minor cleanup.
currantw Jan 28, 2025
2ca7194
Add `ExplainIT` tests for flatten
currantw Jan 28, 2025
9ddfc4a
Revert recursive flattening, add documentation, more test updates
currantw Jan 28, 2025
c54c1f5
One more doctest fix
currantw Jan 28, 2025
8993e11
Fix `ExplainIT` error
currantw Jan 28, 2025
288add2
Add additional test case to `flatten.rst`
currantw Jan 28, 2025
eca3154
Fix `FlattenCommandIT`, add additional test case.
currantw Jan 28, 2025
c89a302
Fix `PhysicalPlanNodeVisitor` test coverage.
currantw Jan 28, 2025
9b2e9ce
Review: use `StringUtils.format` instead of `String.format`.
currantw Jan 29, 2025
82c8ccb
Fix `LogicalFlattenTest`.
currantw Jan 29, 2025
b7d8794
Simplify algorithm for `Analyzer`.
currantw Jan 29, 2025
ca013ef
Update to support flattening nested structs.
currantw Jan 30, 2025
7920bd8
Fix unrelated bug in `IPFUnctionsTest`.
currantw Jan 30, 2025
9d6459f
Update `IPFUnctionsTest` to anchor at start.
currantw Jan 30, 2025
6d040eb
Minor cleanup.
currantw Jan 30, 2025
43c0902
Fix doctest formatting.
currantw Jan 30, 2025
40362bf
Address minor review comments.
currantw Jan 30, 2025
b0a6710
Fix doc tests.
currantw Jan 31, 2025
be26660
Update integratation tests to align with doc tests.
currantw Jan 31, 2025
b3e4401
Review - minor documentation updates.
currantw Jan 31, 2025
4099f10
Remove double periods
currantw Feb 1, 2025
b96cefa
Add comment on `Map.equals`.
currantw Feb 1, 2025
72d98ed
Remove unnecessary error checks.
currantw Feb 1, 2025
4632c03
Update to maintain existing field.
currantw Feb 3, 2025
d755208
Update for test coverage
currantw Feb 3, 2025
09563ab
Simplify `Analyzer` implementation
currantw Feb 3, 2025
1d391ce
Rename `cities` dataset to `flatten`
currantw Feb 5, 2025
ef750f4
SpotlessApply
currantw Feb 5, 2025
14e005e
Minor doc cleanup.
currantw Feb 5, 2025
73885a7
Fix failing IT
currantw Feb 5, 2025
4fbd320
Update incorrect documentation in `Analyzer.visitFlatten`.
currantw Feb 5, 2025
337fb01
Update integ and doc tests to add another example of original field b…
currantw Feb 6, 2025
abe5c6c
Review comment - move example to `Analyzer.visitFlatten` Javadoc.
currantw Feb 6, 2025
a0022f4
Review comment - update `Analyzer.visitFlatten` Javadoc to specify th…
currantw Feb 6, 2025
df99d37
Review comment - remove unnecessary @Getter
currantw Feb 6, 2025
6883214
Review comments - add `testStructNestedDeep` test case
currantw Feb 6, 2025
94a4c8a
Review comments - add `testStructNestedDeep` test case
currantw Feb 6, 2025
26563c9
Woops! Fix failing test.
currantw Feb 6, 2025
bfb51a5
Review comments - extract `PathUtils` constants
currantw Feb 6, 2025
22eccaf
Review comments - update `Analyzer` to not use `Optional`.
currantw Feb 7, 2025
dcd241a
Bunch of additional review comments.
currantw Feb 7, 2025
befe55b
Spotless
currantw Feb 7, 2025
eb93cb1
Spotless
currantw Feb 7, 2025
1f05e85
Additional review comments, including move constants to `ExprValueUti…
currantw Feb 7, 2025
db96c51
Review comments - update tests for exception msg
currantw Feb 7, 2025
c1666ee
Review comments - simplify `FlattenOperator.flattenExprValueAtPath`.
currantw Feb 7, 2025
6e176a3
Change braces in documentation.
currantw Feb 10, 2025
163 changes: 132 additions & 31 deletions core/src/main/java/org/opensearch/sql/analysis/Analyzer.java
@@ -6,6 +6,9 @@
package org.opensearch.sql.analysis;

import static org.opensearch.sql.analysis.DataSourceSchemaIdentifierNameResolver.DEFAULT_DATASOURCE_NAME;
import static org.opensearch.sql.analysis.symbol.Namespace.FIELD_NAME;
import static org.opensearch.sql.analysis.symbol.Namespace.HIDDEN_FIELD_NAME;
import static org.opensearch.sql.analysis.symbol.Namespace.INDEX_NAME;
import static org.opensearch.sql.ast.tree.Sort.NullOrder.NULL_FIRST;
import static org.opensearch.sql.ast.tree.Sort.NullOrder.NULL_LAST;
import static org.opensearch.sql.ast.tree.Sort.SortOrder.ASC;
@@ -26,21 +29,21 @@
import com.google.common.collect.ImmutableSet;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Optional;
import java.util.stream.Collectors;
import org.apache.commons.lang3.tuple.ImmutablePair;
import org.apache.commons.lang3.tuple.Pair;
import org.opensearch.sql.DataSourceSchemaName;
import org.opensearch.sql.analysis.symbol.Namespace;
import org.opensearch.sql.analysis.symbol.Symbol;
import org.opensearch.sql.ast.AbstractNodeVisitor;
import org.opensearch.sql.ast.expression.Argument;
import org.opensearch.sql.ast.expression.Field;
import org.opensearch.sql.ast.expression.Let;
import org.opensearch.sql.ast.expression.Literal;
import org.opensearch.sql.ast.expression.Map;
import org.opensearch.sql.ast.expression.ParseMethod;
import org.opensearch.sql.ast.expression.QualifiedName;
import org.opensearch.sql.ast.expression.UnresolvedExpression;
@@ -52,6 +55,7 @@
import org.opensearch.sql.ast.tree.FetchCursor;
import org.opensearch.sql.ast.tree.FillNull;
import org.opensearch.sql.ast.tree.Filter;
import org.opensearch.sql.ast.tree.Flatten;
import org.opensearch.sql.ast.tree.Head;
import org.opensearch.sql.ast.tree.Kmeans;
import org.opensearch.sql.ast.tree.Limit;
@@ -70,8 +74,11 @@
import org.opensearch.sql.ast.tree.UnresolvedPlan;
import org.opensearch.sql.ast.tree.Values;
import org.opensearch.sql.common.antlr.SyntaxCheckException;
import org.opensearch.sql.common.utils.StringUtils;
import org.opensearch.sql.data.model.ExprMissingValue;
import org.opensearch.sql.data.model.ExprValueUtils;
import org.opensearch.sql.data.type.ExprCoreType;
import org.opensearch.sql.data.type.ExprType;
import org.opensearch.sql.datasource.DataSourceService;
import org.opensearch.sql.exception.SemanticCheckException;
import org.opensearch.sql.expression.DSL;
@@ -94,6 +101,7 @@
import org.opensearch.sql.planner.logical.LogicalEval;
import org.opensearch.sql.planner.logical.LogicalFetchCursor;
import org.opensearch.sql.planner.logical.LogicalFilter;
import org.opensearch.sql.planner.logical.LogicalFlatten;
import org.opensearch.sql.planner.logical.LogicalLimit;
import org.opensearch.sql.planner.logical.LogicalML;
import org.opensearch.sql.planner.logical.LogicalMLCommons;
@@ -165,16 +173,15 @@ public LogicalPlan visitRelation(Relation node, AnalysisContext context) {
dataSourceSchemaIdentifierNameResolver.getSchemaName()),
dataSourceSchemaIdentifierNameResolver.getIdentifierName());
}
table.getFieldTypes().forEach((k, v) -> curEnv.define(new Symbol(Namespace.FIELD_NAME, k), v));
table.getFieldTypes().forEach((k, v) -> curEnv.define(new Symbol(FIELD_NAME, k), v));
table
.getReservedFieldTypes()
.forEach((k, v) -> curEnv.define(new Symbol(Namespace.HIDDEN_FIELD_NAME, k), v));
.forEach((k, v) -> curEnv.define(new Symbol(HIDDEN_FIELD_NAME, k), v));

// Put index name or its alias in index namespace on type environment so qualifier
// can be removed when analyzing qualified name. The value (expr type) here doesn't matter.
curEnv.define(
new Symbol(Namespace.INDEX_NAME, (node.getAlias() == null) ? tableName : node.getAlias()),
STRUCT);
new Symbol(INDEX_NAME, (node.getAlias() == null) ? tableName : node.getAlias()), STRUCT);

return new LogicalRelation(tableName, table);
}
@@ -187,7 +194,7 @@ public LogicalPlan visitRelationSubquery(RelationSubquery node, AnalysisContext

// Put subquery alias in index namespace so the qualifier can be removed
// when analyzing qualified name in the subquery layer
curEnv.define(new Symbol(Namespace.INDEX_NAME, node.getAliasAsTableName()), STRUCT);
curEnv.define(new Symbol(INDEX_NAME, node.getAliasAsTableName()), STRUCT);
return subquery;
}

@@ -219,14 +226,12 @@ public LogicalPlan visitTableFunction(TableFunction node, AnalysisContext contex
context.push();
TypeEnvironment curEnv = context.peek();
Table table = tableFunctionImplementation.applyArguments();
table.getFieldTypes().forEach((k, v) -> curEnv.define(new Symbol(Namespace.FIELD_NAME, k), v));
table.getFieldTypes().forEach((k, v) -> curEnv.define(new Symbol(FIELD_NAME, k), v));
table
.getReservedFieldTypes()
.forEach((k, v) -> curEnv.define(new Symbol(Namespace.HIDDEN_FIELD_NAME, k), v));
.forEach((k, v) -> curEnv.define(new Symbol(HIDDEN_FIELD_NAME, k), v));
curEnv.define(
new Symbol(
Namespace.INDEX_NAME, dataSourceSchemaIdentifierNameResolver.getIdentifierName()),
STRUCT);
new Symbol(INDEX_NAME, dataSourceSchemaIdentifierNameResolver.getIdentifierName()), STRUCT);
return new LogicalRelation(
dataSourceSchemaIdentifierNameResolver.getIdentifierName(),
tableFunctionImplementation.applyArguments());
@@ -277,7 +282,7 @@ public LogicalPlan visitRename(Rename node, AnalysisContext context) {
LogicalPlan child = node.getChild().get(0).accept(this, context);
ImmutableMap.Builder<ReferenceExpression, ReferenceExpression> renameMapBuilder =
new ImmutableMap.Builder<>();
for (Map renameMap : node.getRenameList()) {
for (org.opensearch.sql.ast.expression.Map renameMap : node.getRenameList()) {
Collaborator (conversation resolved): Why does getRenameList return a Map? Shouldn't we rename it?

Expression origin = expressionAnalyzer.analyze(renameMap.getOrigin(), context);
// We should define the new target field in the context instead of analyze it.
if (renameMap.getTarget() instanceof Field) {
@@ -328,11 +333,9 @@ public LogicalPlan visitAggregation(Aggregation node, AnalysisContext context) {
TypeEnvironment newEnv = context.peek();
aggregators.forEach(
aggregator ->
newEnv.define(
new Symbol(Namespace.FIELD_NAME, aggregator.getName()), aggregator.type()));
newEnv.define(new Symbol(FIELD_NAME, aggregator.getName()), aggregator.type()));
groupBys.forEach(
group ->
newEnv.define(new Symbol(Namespace.FIELD_NAME, group.getNameOrAlias()), group.type()));
group -> newEnv.define(new Symbol(FIELD_NAME, group.getNameOrAlias()), group.type()));
return new LogicalAggregation(child, aggregators, groupBys);
}

@@ -357,9 +360,8 @@ public LogicalPlan visitRareTopN(RareTopN node, AnalysisContext context) {
context.push();
TypeEnvironment newEnv = context.peek();
groupBys.forEach(
group -> newEnv.define(new Symbol(Namespace.FIELD_NAME, group.toString()), group.type()));
fields.forEach(
field -> newEnv.define(new Symbol(Namespace.FIELD_NAME, field.toString()), field.type()));
group -> newEnv.define(new Symbol(FIELD_NAME, group.toString()), group.type()));
fields.forEach(field -> newEnv.define(new Symbol(FIELD_NAME, field.toString()), field.type()));

List<Argument> options = node.getNoOfResults();
Integer noOfResults = (Integer) options.get(0).getValue().getValue();
@@ -425,8 +427,7 @@ public LogicalPlan visitProject(Project node, AnalysisContext context) {
context.push();
TypeEnvironment newEnv = context.peek();
namedExpressions.forEach(
expr ->
newEnv.define(new Symbol(Namespace.FIELD_NAME, expr.getNameOrAlias()), expr.type()));
expr -> newEnv.define(new Symbol(FIELD_NAME, expr.getNameOrAlias()), expr.type()));
List<NamedExpression> namedParseExpressions = context.getNamedParseExpressions();
return new LogicalProject(child, namedExpressions, namedParseExpressions);
}
@@ -448,6 +449,107 @@ public LogicalPlan visitEval(Eval node, AnalysisContext context) {
return new LogicalEval(child, expressionsBuilder.build());
}

/**
* Builds and returns a {@link org.opensearch.sql.planner.logical.LogicalFlatten} corresponding to
Collaborator (nit): You can add the import and use `* Builds and returns a {@link LogicalFlatten} corresponding to` instead of the fully qualified name (and rerun spotless).

* the given flatten node, and adds the new fields to the current type environment.
*
* <p><b>Example</b>
*
* <p>Input Data:
*
* <pre>
* {
* struct: {
* integer: 0,
* nested_struct: { string: "value" }
* }
* }
* </pre>
*
* Query 1: <code>flatten struct</code>
*
* <pre>
* {
* struct: {
* integer: 0,
* nested_struct: { string: "value" }
* },
Collaborator, commenting on lines +473 to +476: Isn't removing flattened struct? Why?

* integer: 0,
* nested_struct: { string: "value" }
* }
* </pre>
*
* Query 2: <code>flatten struct.nested_struct</code>
*
* <pre>
* {
* struct: {
* integer: 0,
* nested_struct: { string: "value" },
* string: "value"
* }
* }
* </pre>
*/
@Override
public LogicalPlan visitFlatten(Flatten node, AnalysisContext context) {
LogicalPlan child = node.getChild().getFirst().accept(this, context);

ReferenceExpression fieldExpr =
(ReferenceExpression) expressionAnalyzer.analyze(node.getField(), context);
String fieldName = fieldExpr.getAttr();

// [A] Determine fields to add
// ---------------------------

// Iterate over all the fields defined in the type environment. Find all those that are
// descended from field that is being flattened, and determine the new paths to add. When
// determining the new paths, we need to preserve the portion of the path corresponding to the
// flattened field's parent, if one exists, in order to support flattening nested structs.

TypeEnvironment env = context.peek();
Map<String, ExprType> fieldsMap = env.lookupAllTupleFields(FIELD_NAME);

final String fieldParentPathPrefix =
fieldName.contains(ExprValueUtils.QUALIFIED_NAME_SEPARATOR)
? fieldName.substring(0, fieldName.lastIndexOf(ExprValueUtils.QUALIFIED_NAME_SEPARATOR))
+ ExprValueUtils.QUALIFIED_NAME_SEPARATOR
: "";

// Get entries for paths that are descended from the flattened field.
final String fieldDescendantPathPrefix = fieldName + ExprValueUtils.QUALIFIED_NAME_SEPARATOR;
List<Map.Entry<String, ExprType>> fieldDescendantEntries =
fieldsMap.entrySet().stream()
.filter(e -> e.getKey().startsWith(fieldDescendantPathPrefix))
.toList();

// Get fields to add from descendant entries.
Map<String, ExprType> addFieldsMap = new HashMap<>();
for (Map.Entry<String, ExprType> entry : fieldDescendantEntries) {
String newPath =
fieldParentPathPrefix + entry.getKey().substring(fieldDescendantPathPrefix.length());
addFieldsMap.put(newPath, entry.getValue());
}

// [B] Add new fields to type environment
// --------------------------------------

// Verify that new fields do not overwrite an existing field.
List<String> duplicateFieldNames =
addFieldsMap.keySet().stream().filter(fieldsMap::containsKey).toList();

if (!duplicateFieldNames.isEmpty()) {
throw new SemanticCheckException(
StringUtils.format(
"Flatten command cannot overwrite fields: %s",
Collaborator: Just to confirm that you have an IT to cover that case.

String.join(", ", duplicateFieldNames)));
}

addFieldsMap.forEach((name, type) -> env.define(DSL.ref(name, type)));

return new LogicalFlatten(child, fieldExpr);
}
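
To illustrate the path bookkeeping above, here is a minimal, self-contained sketch (not part of the PR) that mirrors how visitFlatten derives the new field paths when a nested struct is flattened. The field names and types are taken from the Javadoc example, and the local SEPARATOR constant stands in for ExprValueUtils.QUALIFIED_NAME_SEPARATOR.

import java.util.HashMap;
import java.util.Map;

public class FlattenPathSketch {
  private static final String SEPARATOR = ".";

  public static void main(String[] args) {
    // Field paths visible in the type environment before flattening (types shortened to strings).
    Map<String, String> fieldsMap = new HashMap<>();
    fieldsMap.put("struct", "STRUCT");
    fieldsMap.put("struct.integer", "INTEGER");
    fieldsMap.put("struct.nested_struct", "STRUCT");
    fieldsMap.put("struct.nested_struct.string", "STRING");

    String fieldName = "struct.nested_struct";

    // Preserve the parent portion of the path ("struct.") so nested structs flatten in place.
    String parentPrefix =
        fieldName.contains(SEPARATOR)
            ? fieldName.substring(0, fieldName.lastIndexOf(SEPARATOR)) + SEPARATOR
            : "";
    String descendantPrefix = fieldName + SEPARATOR;

    // Re-root every descendant of the flattened field under the parent prefix.
    Map<String, String> addFieldsMap = new HashMap<>();
    for (Map.Entry<String, String> entry : fieldsMap.entrySet()) {
      if (entry.getKey().startsWith(descendantPrefix)) {
        String newPath = parentPrefix + entry.getKey().substring(descendantPrefix.length());
        addFieldsMap.put(newPath, entry.getValue());
      }
    }

    // Prints {struct.string=STRING}, matching "Query 2" in the Javadoc above. A new path that
    // already exists in fieldsMap would be the collision case that makes visitFlatten throw
    // a SemanticCheckException.
    System.out.println(addFieldsMap);
  }
}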

/** Build {@link ParseExpression} to context and skip to child nodes. */
@Override
public LogicalPlan visitParse(Parse node, AnalysisContext context) {
@@ -465,7 +567,7 @@ public LogicalPlan visitParse(Parse node, AnalysisContext context) {
ParseExpression expr =
ParseUtils.createParseExpression(
parseMethod, sourceField, patternExpression, DSL.literal(group));
curEnv.define(new Symbol(Namespace.FIELD_NAME, group), expr.type());
curEnv.define(new Symbol(FIELD_NAME, group), expr.type());
context.getNamedParseExpressions().add(new NamedExpression(group, expr));
});
return child;
@@ -524,7 +626,7 @@ public LogicalPlan visitKmeans(Kmeans node, AnalysisContext context) {
java.util.Map<String, Literal> options = node.getArguments();

TypeEnvironment currentEnv = context.peek();
currentEnv.define(new Symbol(Namespace.FIELD_NAME, "ClusterID"), ExprCoreType.INTEGER);
currentEnv.define(new Symbol(FIELD_NAME, "ClusterID"), ExprCoreType.INTEGER);

return new LogicalMLCommons(child, "kmeans", options);
}
@@ -537,13 +639,13 @@ public LogicalPlan visitAD(AD node, AnalysisContext context) {

TypeEnvironment currentEnv = context.peek();

currentEnv.define(new Symbol(Namespace.FIELD_NAME, RCF_SCORE), ExprCoreType.DOUBLE);
currentEnv.define(new Symbol(FIELD_NAME, RCF_SCORE), ExprCoreType.DOUBLE);
if (Objects.isNull(node.getArguments().get(TIME_FIELD))) {
currentEnv.define(new Symbol(Namespace.FIELD_NAME, RCF_ANOMALOUS), ExprCoreType.BOOLEAN);
currentEnv.define(new Symbol(FIELD_NAME, RCF_ANOMALOUS), ExprCoreType.BOOLEAN);
} else {
currentEnv.define(new Symbol(Namespace.FIELD_NAME, RCF_ANOMALY_GRADE), ExprCoreType.DOUBLE);
currentEnv.define(new Symbol(FIELD_NAME, RCF_ANOMALY_GRADE), ExprCoreType.DOUBLE);
currentEnv.define(
new Symbol(Namespace.FIELD_NAME, (String) node.getArguments().get(TIME_FIELD).getValue()),
new Symbol(FIELD_NAME, (String) node.getArguments().get(TIME_FIELD).getValue()),
ExprCoreType.TIMESTAMP);
}
return new LogicalAD(child, options);
@@ -578,8 +680,7 @@ public LogicalPlan visitML(ML node, AnalysisContext context) {
LogicalPlan child = node.getChild().get(0).accept(this, context);
TypeEnvironment currentEnv = context.peek();
node.getOutputSchema(currentEnv).entrySet().stream()
.forEach(
v -> currentEnv.define(new Symbol(Namespace.FIELD_NAME, v.getKey()), v.getValue()));
.forEach(v -> currentEnv.define(new Symbol(FIELD_NAME, v.getKey()), v.getValue()));

return new LogicalML(child, node.getArguments());
}
@@ -620,7 +721,7 @@ public LogicalPlan visitTrendline(Trendline node, AnalysisContext context) {
resolvedField.type().typeName()));
}
}
currEnv.define(new Symbol(Namespace.FIELD_NAME, computation.getAlias()), averageType);
currEnv.define(new Symbol(FIELD_NAME, computation.getAlias()), averageType);
computationsAndTypes.add(Pair.of(computation, averageType));
});

core/src/main/java/org/opensearch/sql/analysis/DataSourceSchemaIdentifierNameResolver.java
@@ -8,6 +8,7 @@
package org.opensearch.sql.analysis;

import java.util.List;
import org.opensearch.sql.data.model.ExprValueUtils;
import org.opensearch.sql.datasource.DataSourceService;

public class DataSourceSchemaIdentifierNameResolver {
@@ -21,8 +22,6 @@ public class DataSourceSchemaIdentifierNameResolver {
private final String identifierName;
private final DataSourceService dataSourceService;

private static final String DOT = ".";

/**
* Data model for capturing dataSourceName, schema and identifier from fully qualifiedName. In the
* current state, it is used to capture DataSourceSchemaTable name and DataSourceSchemaFunction in
@@ -35,7 +34,7 @@ public DataSourceSchemaIdentifierNameResolver(
DataSourceService dataSourceService, List<String> parts) {
this.dataSourceService = dataSourceService;
List<String> remainingParts = captureSchemaName(captureDataSourceName(parts));
identifierName = String.join(DOT, remainingParts);
identifierName = String.join(ExprValueUtils.QUALIFIED_NAME_SEPARATOR, remainingParts);
}

public String getIdentifierName() {
core/src/main/java/org/opensearch/sql/ast/AbstractNodeVisitor.java
@@ -47,6 +47,7 @@
import org.opensearch.sql.ast.tree.FetchCursor;
import org.opensearch.sql.ast.tree.FillNull;
import org.opensearch.sql.ast.tree.Filter;
import org.opensearch.sql.ast.tree.Flatten;
import org.opensearch.sql.ast.tree.Head;
import org.opensearch.sql.ast.tree.Kmeans;
import org.opensearch.sql.ast.tree.Limit;
@@ -107,6 +108,10 @@ public T visitTableFunction(TableFunction node, C context) {
return visitChildren(node, context);
}

public T visitFlatten(Flatten node, C context) {
return visitChildren(node, context);
}

public T visitFilter(Filter node, C context) {
return visitChildren(node, context);
}
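
A small, hypothetical sketch (not from the PR) of how a downstream visitor could hook into the new callback; the class name and the counting behavior are invented for illustration, and only visitFlatten and visitChildren come from AbstractNodeVisitor.

import org.opensearch.sql.ast.AbstractNodeVisitor;
import org.opensearch.sql.ast.tree.Flatten;

// Counts Flatten nodes in an unresolved plan by overriding the new callback.
class FlattenCounter extends AbstractNodeVisitor<Integer, Void> {
  @Override
  public Integer visitFlatten(Flatten node, Void context) {
    Integer fromChildren = visitChildren(node, context);
    return (fromChildren == null ? 0 : fromChildren) + 1;
  }
}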
5 changes: 5 additions & 0 deletions core/src/main/java/org/opensearch/sql/ast/dsl/AstDSL.java
@@ -51,6 +51,7 @@
import org.opensearch.sql.ast.tree.Eval;
import org.opensearch.sql.ast.tree.FillNull;
import org.opensearch.sql.ast.tree.Filter;
import org.opensearch.sql.ast.tree.Flatten;
import org.opensearch.sql.ast.tree.Head;
import org.opensearch.sql.ast.tree.Limit;
import org.opensearch.sql.ast.tree.Parse;
@@ -104,6 +105,10 @@ public static Eval eval(UnresolvedPlan input, Let... projectList) {
return new Eval(Arrays.asList(projectList)).attach(input);
}

public Flatten flatten(UnresolvedPlan input, Field field) {
return new Flatten(field).attach(input);
}

public static UnresolvedPlan projectWithArg(
UnresolvedPlan input, List<Argument> argList, UnresolvedExpression... projectList) {
return new Project(Arrays.asList(projectList), argList).attach(input);
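
For illustration only, a sketch of how the new builder might be exercised when assembling an unresolved AST in a test. The index and field names are hypothetical, and it assumes the flatten helper is exposed statically like the sibling AstDSL builders (the declaration above omits the static keyword).

import static org.opensearch.sql.ast.dsl.AstDSL.field;
import static org.opensearch.sql.ast.dsl.AstDSL.flatten;
import static org.opensearch.sql.ast.dsl.AstDSL.relation;

import org.opensearch.sql.ast.tree.UnresolvedPlan;

class FlattenDslSketch {
  // Roughly equivalent to the PPL query: source = my_index | flatten struct
  UnresolvedPlan plan = flatten(relation("my_index"), field("struct"));
}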
2 changes: 0 additions & 2 deletions core/src/main/java/org/opensearch/sql/ast/tree/Eval.java
@@ -10,14 +10,12 @@
import lombok.EqualsAndHashCode;
import lombok.Getter;
import lombok.RequiredArgsConstructor;
import lombok.Setter;
import lombok.ToString;
import org.opensearch.sql.ast.AbstractNodeVisitor;
import org.opensearch.sql.ast.expression.Let;

/** AST node represent Eval operation. */
@Getter
@Setter
@ToString
@EqualsAndHashCode(callSuper = false)
@RequiredArgsConstructor