Add annotation parameter that stores metadata for programs which extend FTW #65
Hi FTW team. Thanks so much for your great product! It's helping us to improve the security of our application.
In this PR, I'd like to propose adding a new parameter, `annotation`, to the YAML file. I would like to ask your opinion, or ask you to merge the PR if it looks good.

Short description of what `annotation` is

As the updated YAMLFormat.md describes, `annotation` stores arbitrary data so that external programs which extend FTW, such as CRS_Tests.py of coreruleset, can embed their own data in the format they define.
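To make this concrete, here is a rough sketch of what an annotated test entry could look like; the surrounding structure is abbreviated, and every key inside `annotation` is hypothetical and entirely up to the extending program:

```yaml
tests:
  - test_title: "920100-1"
    # Opaque to FTW itself: an extending program defines and
    # interprets its own keys here (everything below is hypothetical).
    annotation:
      owner: crs-team
      notes: "arbitrary, program-defined metadata"
    stages:
      # ... request/response stages as usual ...
```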
Why `annotation` is useful

This allows external programs that extend FTW to control the test flow based on their program-specific settings, such as skipping tests. Example usage is as follows.
Example: skip tests based on the `runtime` in the regression test of the coreruleset

In this example, `runtime` is a concept of the CRS test, not of FTW. So, skipping a test case based on it can be handled only by the CRS side.
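For instance, a CRS test case could carry the runtimes it applies to inside `annotation`. This is only a sketch; the `runtime` key and its values are a CRS-side convention that FTW itself never looks at:

```yaml
tests:
  - test_title: "920100-2"
    annotation:
      # CRS-side convention (hypothetical): the runtimes this test
      # case should run on.
      runtime:
        - modsec2-apache
        - modsec3-nginx
    stages:
      # ... request/response stages as usual ...
```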
Then, the test runner of the CRS can implement its own skip mechanism like this:
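A minimal sketch of such a skip mechanism, assuming the CRS runner ends up with each test's `annotation` as a plain dict and knows the currently selected runtime from its own CLI parameter (the helper name and attribute access below are hypothetical, not part of FTW's API):

```python
import pytest


def skip_if_other_runtime(test, selected_runtime):
    """Skip a test case whose annotation restricts it to other runtimes."""
    # `annotation` is whatever the YAML file carried; FTW passes it
    # through untouched, so the CRS runner interprets it here.
    annotation = getattr(test, "annotation", None) or {}
    allowed = annotation.get("runtime")
    if allowed is not None and selected_runtime not in allowed:
        pytest.skip(
            "test is annotated for runtimes %s, running against %s"
            % (allowed, selected_runtime)
        )
```

The runner would call this at the start of each generated test case, before sending any requests, so annotated tests are reported as skipped instead of failed.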
Another possible use case.
Motivation
FTW can be extended by external programs which use FTW as a library, just as the regression test of the coreruleset does.
Such a program often introduces its own config system; CRS, for example, introduces a config option to switch between ModSecurity implementations.
I guess it's a common demand to control the test cases based on such a program's own configuration. In my case, as an example, I wanted to skip some test cases on Nginx and Envoy, on which I implemented my own WAF.
Design Discussion
Q: Why do you expect external applications to control the test flow, instead of implementing actual use cases, such as skipping tests, in FTW itself?

A: Because a user will want fine-grained control based on the config or additional CLI parameters that the external application provides. The FTW side doesn't know any application-side concepts, such as whether the CLI parameter for the CRS test is `modsec2-apache` or `modsec3-nginx`. Thus, `annotation` can be used to delegate that responsibility to external applications that extend FTW, just as FTW delegates the logging method to them. As a side note, it would also be possible to introduce new concepts into FTW, such as a `configchecker_obj` that references application-specific settings, but `annotation` is a simpler change and helps to keep FTW a general framework.

Possible future work
This PR adds the `annotation` parameter only to `Test` and `RuleSet`. However, if `annotation` is added to `Output` as well, a program which extends FTW can implement the conditional output test discussed in #19.
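A rough sketch of that idea; as before, everything inside `annotation` is hypothetical and would be evaluated only by the extending program, not by FTW:

```yaml
output:
  status: 200
  annotation:
    # Hypothetical: the extending program could override the expected
    # output per runtime and enforce it itself.
    when_runtime:
      modsec3-nginx:
        status: 403
```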
Reference
Here are two commits from my fork of CRS that implement a feature to skip tests on `modsec3-nginx` or `modsec2-apache`.