diff --git a/DESCRIPTION b/DESCRIPTION index 74f1bb56..26c8b168 100644 --- a/DESCRIPTION +++ b/DESCRIPTION @@ -2,7 +2,7 @@ Package: DatabaseConnector Type: Package Title: Connecting to Various Database Platforms Version: 6.2.3 -Date: 2023-06-23 +Date: 2023-06-28 Authors@R: c( person("Martijn", "Schuemie", email = "schuemie@ohdsi.org", role = c("aut", "cre")), person("Marc", "Suchard", role = c("aut")), diff --git a/cran-comments.md b/cran-comments.md index 2a5c1982..7194493b 100644 --- a/cran-comments.md +++ b/cran-comments.md @@ -1,12 +1,12 @@ -This update includes 4 changes and 1 bugfix (see NEWS.md). +This update includes 1 changes and 2 bugfixes (see NEWS.md). --- ## Test environments -* Ubuntu 20.04, R 4.3.0 +* Ubuntu 20.04, R 4.3.1 * Microsoft Windows Server 2016, R 4.2.3 -* MacOS, 4.3.0 -* Windows 10, R 4.3.0 +* MacOS, 4.3.1 +* Windows 10, R 4.3.1 ## R CMD check results diff --git a/docs/404.html b/docs/404.html index 109ab42d..8835475a 100644 --- a/docs/404.html +++ b/docs/404.html @@ -32,7 +32,7 @@
diff --git a/docs/articles/Connecting.html b/docs/articles/Connecting.html index 24e4b481..74fd9642 100644 --- a/docs/articles/Connecting.html +++ b/docs/articles/Connecting.html @@ -33,7 +33,7 @@ @@ -88,9 +88,10 @@vignettes/Connecting.Rmd
Connecting.Rmd
This vignette describes how you can use the DatabaseConnector
package to connect to a database.
This vignette describes how you can use the
+DatabaseConnector
package to connect to a database.
DatabaseConnector
supports these database platforms:
Before DatabaseConnector
can be used to connect to a database, the drivers for your platform need to be downloaded to a location in the local file system, which we’ll refer to as the JAR folder.
Before DatabaseConnector
can be used to connect to a
+database, the drivers for your platform need to be downloaded to a
+location in the local file system, which we’ll refer to as the JAR
+folder.
The JAR folder is just a folder in the local file system where the database drivers are stored. It is highly recommended to use the DATABASECONNECTOR_JAR_FOLDER
environmental variable to point to this folder, which you can for example set using:
The JAR folder is just a folder in the local file system where the
+database drivers are stored. It is highly recommended to use the
+DATABASECONNECTOR_JAR_FOLDER
environmental variable to
+point to this folder, which you can for example set using:
Sys.setenv("DATABASECONNECTOR_JAR_FOLDER" = "c:/temp/jdbcDrivers")
Even better would be to add this entry to your .Renviron
file:
Even better would be to add this entry to your .Renviron
+file:
DATABASECONNECTOR_JAR_FOLDER = 'c:/temp/jdbcDrivers'
-That way, the environmental variable will be automatically set whenever you start R. A convenient way to edit your .Renviron
file is by using usethis
:
That way, the environmental variable will be automatically set
+whenever you start R. A convenient way to edit your
+.Renviron
file is by using usethis
:
install.packages("usethis")
usethis::edit_r_environ()
If you don’t use the DATABASECONNECTOR_JAR_FOLDER
environmental variable, you will need to provide the pathToDriver
argument every time you call the downloadJdbcDrivers
, connect
, dbConnect
, or createConnectionDetails
functions.
If you don’t use the DATABASECONNECTOR_JAR_FOLDER
+environmental variable, you will need to provide the
+pathToDriver
argument every time you call the
+downloadJdbcDrivers
, connect
,
+dbConnect
, or createConnectionDetails
+functions.
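For example, a connect() call that passes the driver location explicitly could look like the following sketch (the server, credentials, and folder path are placeholders, not values from the original vignette):
conn <- connect(dbms = "postgresql",
                server = "localhost/postgres",
                user = "joe",
                password = "secret",
                pathToDriver = "c:/temp/jdbcDrivers")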
For your convenience these JDBC drivers are hosted on the OHDSI GitHub pages, and can be downloaded using the downloadJdbcDrivers
function. You’ll first need to specify the JAR folder as described in the previous section, for example using
For your convenience these JDBC drivers are hosted on the OHDSI
+GitHub pages, and can be downloaded using the
+downloadJdbcDrivers
function. You’ll first need to specify
+the JAR folder as described in the previous section, for example
+using
Sys.setenv("DATABASECONNECTOR_JAR_FOLDER" = "c:/temp/jdbcDrivers")
And next download the driver. For example, for PostgreSQL:
downloadJdbcDrivers("postgresql")
## DatabaseConnector JDBC drivers downloaded to 'c:/temp/jdbcDrivers'.
-Note that if we hadn’t specified the DATABASECONNECTOR_JAR_FOLDER
environmental variable, we would have to specify the pathToDriver
argument when calling downloadJdbcDrivers
.
Note that if we hadn’t specified the
+DATABASECONNECTOR_JAR_FOLDER
environmental variable, we
+would have to specify the pathToDriver
argument when
+calling downloadJdbcDrivers
.
Because of licensing reasons the drivers for BigQuery, Netezza and Impala are not included but must be obtained by the user. See these instructions on how to download these drivers, which you can also see by typing ?jdbcDrivers
.
Because of licensing reasons the drivers for BigQuery, Netezza and
+Impala are not included but must be obtained by the user. See these
+instructions on how to download these drivers, which you can also
+see by typing ?jdbcDrivers
.
For SQLite we actually don’t use a JDBC driver. Instead, we use the RSQLite package, which can be installed using
+For SQLite we actually don’t use a JDBC driver. Instead, we use the
+RSQLite package, which can be installed using
install.packages("RSQLite")
To connect to a database a number of details need to be specified, such as the database platform, the location of the server, the user name, password, and path to the driver. We can call the connect()
function and specify these details directly:
To connect to a database a number of details need to be specified,
+such as the database platform, the location of the server, the user
+name, password, and path to the driver. We can call the
+connect()
function and specify these details directly:
conn <- connect(dbms = "postgresql",
server = "localhost/postgres",
user = "joe",
password = "secret")
## Connecting using PostgreSQL driver
-See this webpage or type ?connect
for information on which details are required for each platform. Note that we did not need to specify the pathToDriver
argument because we previously already set the DATABASECONNECTOR_JAR_FOLDER
environmental variable.
See this
+webpage or type ?connect
for information on which
+details are required for each platform. Note that we did not need to
+specify the pathToDriver
argument because we previously
+already set the DATABASECONNECTOR_JAR_FOLDER
environmental
+variable.
Don’t forget to close any connection afterwards:
disconnect(conn)
Instead of providing the server name, it is also possible to provide the JDBC connection string if this is more convenient:
+Instead of providing the server name, it is also possible to provide
+the JDBC connection string if this is more convenient:
conn <- connect(dbms = "postgresql",
connectionString = "jdbc:postgresql://localhost:5432/postgres",
user = "joe",
password = "secret")
## Connecting using PostgreSQL driver
-Sometimes we may want to first specify the connection details, and defer connecting until later. This may be convenient for example when the connection is established inside a function, and the details need to be passed as an argument. We can use the createConnectionDetails
function for this purpose:
Sometimes we may want to first specify the connection details, and
+defer connecting until later. This may be convenient for example when
+the connection is established inside a function, and the details need to
+be passed as an argument. We can use the
+createConnectionDetails
function for this purpose:
details <- createConnectionDetails(dbms = "postgresql",
server = "localhost/postgres",
@@ -193,12 +235,25 @@ Creating a connection
Using Windows Authentication for SQL Server
-In some organizations using Microsoft SQL Server and Windows, it is possible to use Windows Authentication to connect to the server, meaning you won’t have to provide a user name and password, since your Windows credentials will be used. This will require downloading the SQL Server authentication DLL file, and placing it somewhere on your system path. If you don’t have rights to add files to a place on your system path, you can place it anywhere, and set the PATH_TO_AUTH_DLL
environmental variable, either using the Sys.setenv() function
, or by adding it to your .Renviron
file. See this webpage or type ?connect
for details on where to get the DLL (and what specific version).
+In some organizations using Microsoft SQL Server and Windows, it is
+possible to use Windows Authentication to connect to the server, meaning
+you won’t have to provide a user name and password, since your Windows
+credentials will be used. This will require downloading the SQL Server
+authentication DLL file, and placing it somewhere on your system path.
+If you don’t have rights to add files to a place on your system path,
+you can place it anywhere, and set the PATH_TO_AUTH_DLL
+environmental variable, either using the Sys.setenv() function
, or
+by adding it to your .Renviron
file. See this
+webpage or type ?connect
for details on where to get
+the DLL (and what specific version).
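A minimal sketch of such a connection (the server name is a placeholder; omitting user and password is what makes the driver fall back to Windows Authentication, assuming the DLL is in place):
# Assumes the authentication DLL is on the system path or PATH_TO_AUTH_DLL is set.
conn <- connect(dbms = "sql server",
                server = "myserver.mycompany.com")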
DatabaseConnector
also supports SQLite through the RSQLite package, mainly for testing and demonstration purposes. Provide the path to the SQLite file as the server
argument when connecting. If no file exists it will be created:
DatabaseConnector
also supports SQLite through the RSQLite
+package, mainly for testing and demonstration purposes. Provide the
+path to the SQLite file as the server
argument when
+connecting. If no file exists it will be created:
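A sketch of such a call (the file path is a placeholder):
conn <- connect(dbms = "sqlite",
                server = "c:/temp/myDatabase.sqlite")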
## Connecting using SQLite driver
@@ -207,7 +262,7 @@ ## Inserting data took 0.124 secs
+## Inserting data took 0.0811 secs
querySql(conn, "SELECT COUNT(*) FROM main.cars;")
## COUNT(*)
diff --git a/docs/articles/DbiAndDbplyr.html b/docs/articles/DbiAndDbplyr.html
index 3115027b..e778945d 100644
--- a/docs/articles/DbiAndDbplyr.html
+++ b/docs/articles/DbiAndDbplyr.html
@@ -33,7 +33,7 @@
vignettes/DbiAndDbplyr.Rmd
DbiAndDbplyr.Rmd
This vignette describes how to use the DatabaseConnector
package through the DBI
and dbplyr
interfaces. It assumes you already know how to create a connection as described in the ‘Connecting to a database’ vignette.
All functions of the DatabaseConnector
DBI
interface apply SQL translation, thus making it an interface to a virtual database platform speaking OHDSISql as defined in SqlRender
.
This vignette describes how to use the DatabaseConnector
+package through the DBI
and dbplyr
interfaces.
+It assumes you already know how to create a connection as described in
+the ‘Connecting to a database’ vignette.
All functions of the DatabaseConnector
DBI
+interface apply SQL translation, thus making it an interface to a
+virtual database platform speaking OHDSISql as defined in
+SqlRender
.
We can use the dbConnect()
function, which is equivalent to the connect()
function:
We can use the dbConnect()
function, which is equivalent
+to the connect()
function:
connection <- dbConnect(
DatabaseConnectorDriver(),
@@ -125,7 +134,11 @@ Connecting
Querying
-Querying and executing SQL can be done through the usual DBI
functions. SQL statements are assumed to be written in ‘OhdsiSql’, a subset of SQL Server SQL (see the SqlRender
package for details), and are automatically translated to the appropriate SQL dialect. For example:
+Querying and executing SQL can be done through the usual
+DBI
functions. SQL statements are assumed to be written in
+‘OhdsiSql’, a subset of SQL Server SQL (see the SqlRender
+package for details), and are automatically translated to the
+appropriate SQL dialect. For example:
dbGetQuery(connection, "SELECT TOP 3 * FROM cdmv5.person")
## person_id gender_concept_id year_of_birth
@@ -150,7 +163,10 @@ Querying
Using dbplyr
-We can create a table based on a DatabaseConnector
connection. The inDatabaseSchema()
function allows us to use standard databaseSchema
notation promoted by SqlRender:
+We can create a table based on a DatabaseConnector
+connection. The inDatabaseSchema()
function allows us to
+use standard databaseSchema
notation promoted by
+SqlRender:
library(dplyr)
person <- tbl(connection, inDatabaseSchema("cdmv5", "person"))
@@ -169,7 +185,12 @@ Using dbplyr
Date functions
-The dbplyr
package does not support date functions, but DatabaseConnector
provides the dateDiff()
, dateAdd()
, eoMonth()
, dateFromParts()
, year()
, month()
, and day()
functions that will correctly translate to various data platforms:
+The dbplyr
package does not support date functions, but
+DatabaseConnector
provides the dateDiff()
,
+dateAdd()
, eoMonth()
,
+dateFromParts()
, year()
, month()
,
+and day()
functions that will correctly translate to
+various data platforms:
observationPeriod <- tbl(connection, inDatabaseSchema("cdmv5", "observation_period"))
observationPeriod %>%
@@ -183,17 +204,33 @@ Date functions
Allowed table and field names in dbplyr
-Because of the many idiosyncrasies in how different data platforms store and transform table and field names, it is currently not possible to use any names that would require quotes. So for example the names person
, person_id
, and observation_period
are fine, but Person ID
and Obs. Period
are not. In general, it is highly recommended to use lower case snake_case for database table and field names.
+Because of the many idiosyncrasies in how different data platforms
+store and transform table and field names, it is currently not possible
+to use any names that would require quotes. So for example the names
+person
, person_id
, and
+observation_period
are fine, but Person ID
and
+Obs. Period
are not. In general, it is highly recommended to
+use lower case snake_case for database table and field
+names.
Temp tables
-The DBI
interface uses temp table emulation on those platforms that do not support real temp tables. This does require that for those platforms the user specify a tempEmulationSchema
, preferably using
+The DBI
interface uses temp table emulation on those
+platforms that do not support real temp tables. This does require that
+for those platforms the user specify a tempEmulationSchema
,
+preferably using
options(sqlRenderTempEmulationSchema = "a_schema")
-Where "a_schema"
refers to a schema where the user has write access. If we know we will need temp tables, we can use the assertTempEmulationSchemaSet()
function to verify this option has been set. This function will throw an error if it is not set, but only if the provided dbms is a platform that requires temp table emulation.
-In OHDSISql
, temp tables are referred to using a ‘#’ prefix. For example:
+Where "a_schema"
refers to a schema where the user has
+write access. If we know we will need temp tables, we can use the
+assertTempEmulationSchemaSet()
to verify this option has
+been set. This function will throw an error if it is not set, but only
+if the provided dbms is a platform that requires temp table
+emulation.
+In OHDSISql
, temp tables are referred to using a ‘#’
+prefix. For example:
dbWriteTable(connection, "#temp", cars)
## Inserting data took 0.053 secs
@@ -204,7 +241,8 @@ Temp tables in dbplyr
carsTable <- copy_to(connection, cars)
## Created a temporary table named #cars
-The compute()
function also creates a temp table, for example:
+The compute()
function also creates a temp table, for
+example:
tempTable <- person %>%
filter(gender_concept_id == 8507) %>%
@@ -214,16 +252,21 @@ Temp tables in dbplyr
Cleaning up emulated temp tables
-Emulated temp tables are not really temporary, and therefore have to be removed when no longer needed. A convenient way to drop all emulated temp tables created so far in an R session is using the dropEmulatedTempTables()
function:
+Emulated temp tables are not really temporary, and therefore have to
+be removed when no longer needed. A convenient way to drop all emulated
+temp tables created so far in an R session is using the
+dropEmulatedTempTables()
function:
dropEmulatedTempTables(connection)
-In our example, this does not do anything because we’re using a PostgreSQL server, which does natively support temp tables.
+In our example, this does not do anything because we’re using a
+PostgreSQL server, which does natively support temp tables.
We can use the dbDisconnect()
function, which is equivalent to the disconnect()
function:
We can use the dbDisconnect()
function, which is
+equivalent to the disconnect()
function:
dbDisconnect(connection)
vignettes/Querying.Rmd
Querying.Rmd
This vignette describes how to use the DatabaseConnector
package to query a database. It assumes you already know how to create a connection as described in the ‘Connecting to a database’ vignette.
This vignette describes how to use the DatabaseConnector
+package to query a database. It assumes you already know how to create a
+connection as described in the ‘Connecting to a database’ vignette.
The main functions for querying the database are the querySql()
and executeSql()
functions. The difference between these functions is that querySql()
expects data to be returned by the database, and can handle only one SQL statement at a time. In contrast, executeSql()
does not expect data to be returned, and accepts multiple SQL statements in a single SQL string.
The main functions for querying the database are the
+querySql()
and executeSql()
functions. The
+difference between these functions is that querySql()
+expects data to be returned by the database, and can handle only one SQL
+statement at a time. In contrast, executeSql()
does not
+expect data to be returned, and accepts multiple SQL statements in a
+single SQL string.
Some examples:
conn <- connect(dbms = "postgresql",
@@ -123,11 +132,22 @@ Querying## 3 3 8507 1977
executeSql(conn, "TRUNCATE TABLE foo; DROP TABLE foo; CREATE TABLE foo (bar INT);")
Both functions provide extensive error reporting: When an error is thrown by the server, the error message and the offending piece of SQL are written to a text file to allow better debugging. The executeSql()
function also by default shows a progress bar, indicating the percentage of SQL statements that have been executed. If those attributes are not desired, the package also offers the lowLevelQuerySql()
and lowLevelExecuteSql()
functions.
Both functions provide extensive error reporting: When an error is
+thrown by the server, the error message and the offending piece of SQL
+are written to a text file to allow better debugging. The
+executeSql()
function also by default shows a progress bar,
+indicating the percentage of SQL statements that have been executed. If
+those attributes are not desired, the package also offers the
+lowLevelQuerySql()
and lowLevelExecuteSql()
+functions.
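As an illustration, the low-level variants take a connection and a single SQL string (a sketch; the table names are placeholders):
personCount <- lowLevelQuerySql(conn, "SELECT COUNT(*) FROM person;")
lowLevelExecuteSql(conn, "DROP TABLE foo;")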
Sometimes the data to be fetched from the database is too large to fit into memory. In this case one can use the Andromeda
package to store R data objects on file, and use them as if they are available in memory. DatabaseConnector
can download data directly into Andromeda objects:
Sometimes the data to be fetched from the database is too large to
+fit into memory. In this case one can use the Andromeda
+package to store R data objects on file, and use them as if they are
+available in memory. DatabaseConnector
can download data
+directly into Andromeda objects:
library(Andromeda)
x <- andromeda()
@@ -135,33 +155,59 @@ Querying using Andromeda objects sql = "SELECT * FROM person",
andromeda = x,
andromedaTableName = "person")
Where x
is now an Andromeda
object with table person
.
Where x
is now an Andromeda
object with
+table person
.
One challenge when writing code that is intended to run on multiple database platforms is that each platform has its own unique SQL dialect. To tackle this problem the SqlRender package was developed. SqlRender
can translate SQL from a single starting dialect (SQL Server SQL) into any of the platforms supported by DatabaseConnector. The following convenience functions are available that first call the render()
and translate()
functions in SqlRender
: renderTranslateExecuteSql()
, renderTranslateQuerySql()
, renderTranslateQuerySqlToAndromeda()
. For example:
One challenge when writing code that is intended to run on multiple
+database platforms is that each platform has its own unique SQL dialect.
+To tackle this problem the SqlRender package was
+developed. SqlRender
can translate SQL from a single
+starting dialect (SQL Server SQL) into any of the platforms supported by
+DatabaseConnector. The following convenience functions are available
+that first call the render()
and translate()
+functions in SqlRender
:
+renderTranslateExecuteSql()
,
+renderTranslateQuerySql()
,
+renderTranslateQuerySqlToAndromeda()
. For example:
persons <- renderTranslateQuerySql(conn,
sql = "SELECT TOP 10 * FROM @schema.person",
schema = "cdm_synpuf")
Note that the SQL Server-specific ‘TOP 10’ syntax will be translated to for example ‘LIMIT 10’ on PostgreSQL, and that the SQL parameter @schema
will be instantiated with the provided value ‘cdm_synpuf’.
Note that, on some platforms like Oracle, when using temp tables, it might be required to provide the tempEmulationSchema
argument, since these platforms do not support temp tables the way other platforms do.
Note that the SQL Server-specific ‘TOP 10’ syntax will be translated
+to for example ‘LIMIT 10’ on PostgreSQL, and that the SQL parameter
+@schema
will be instantiated with the provided value
+‘cdm_synpuf’.
Note that, on some platforms like Oracle, when using temp tables, it
+might be required to provide the tempEmulationSchema
+argument, since these platforms do not support temp tables the way other
+platforms do.
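A sketch of passing that argument (the schema names and the SQL are illustrative, not taken from the original vignette):
renderTranslateExecuteSql(conn,
                          sql = "SELECT person_id INTO #male_persons FROM @schema.person WHERE gender_concept_id = 8507;",
                          schema = "cdm_synpuf",
                          tempEmulationSchema = "scratch_schema")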
Although it is also possible to insert data in the database by sending SQL statements using the executeSql()
function, it is often convenient and faster to use the insertTable()
function:
Although it is also possible to insert data in the database by
+sending SQL statements using the executeSql()
function, it
+is often convenient and faster to use the insertTable()
+function:
data(mtcars)
insertTable(conn, "mtcars", mtcars, createTable = TRUE)
In this example, we’re uploading the mtcars data frame to a table called ‘mtcars’ on the server, which will be automatically created.
+In this example, we’re uploading the mtcars data frame to a table
+called ‘mtcars’ on the server, which will be automatically created.
For several reasons it might be helpful to log all queries sent to the server (and the time to completion), for example to understand performance issues. For this one can use the ParallelLogger
package. If the LOG_DATABASECONNECTOR_SQL
option is set to TRUE
, each query will be logged at the ‘trace’ level. For example:
For several reasons it might be helpful to log all queries sent to
+the server (and the time to completion), for example to understand
+performance issues. For this one can use the ParallelLogger
+package. If the LOG_DATABASECONNECTOR_SQL
option is set to
+TRUE
, each query will be logged at the ‘trace’ level. For
+example:
options(LOG_DATABASECONNECTOR_SQL = TRUE)
ParallelLogger::addDefaultFileLogger("sqlLog.txt", name = "TEST_LOGGER")
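When logging is no longer needed, the logger can be removed again (assuming the ParallelLogger convention of unregistering a logger by name):
# Queries executed after the two calls above are written to sqlLog.txt at the 'trace' level.
ParallelLogger::unregisterLogger("TEST_LOGGER")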
diff --git a/docs/articles/index.html b/docs/articles/index.html
index 01909731..a7795349 100644
--- a/docs/articles/index.html
+++ b/docs/articles/index.html
@@ -17,7 +17,7 @@
Changes:
+dbFetch()
function now respects n = -1
and n = Inf
arguments. Will throw warning if other value is used.Bugfixes:
+Fixing error about missing origin when fetching dates on older R versions.
Fixing RStudio connection panel information for DuckDB.
Changes:
Changing heuristic for detecting when almost running out of Java heap.
Setting default fetchRingBufferSize
for RedShift to 100MB (instead of 1GB) to prevent Java out of heap errors, and overall better performance.
Using integers instead of strings to pass dates from Java to R for improved speed.
Using doubles instead of strings to pass datetimes from Java to R for improved speed.
Bugfixes:
-Bugfixes:
-rJava
and rlang
interaction causing no field, method or inner class called 'use_cli_format'
errors to be thrown when Java throws an error.rJava
and rlang
interaction causing no field, method or inner class called 'use_cli_format'
errors to be thrown when Java throws an error.Changes:
@@ -92,7 +102,8 @@Changes:
-Bugfixes:
+Bugfixes:
Fixed capacity < 0
error message when using a large Java heap space.
Fixed ‘optional feature not supported’ error when connecting to DataBricks using JDBC.
Fixed insertTable()
on Snowflake when data includes POSIXct
type.
Querying to Andromeda when using a DBI driver (instead of a JDBC driver) now also uses batching to avoid running out of memory.
Adding appendToTable
argument to querySqlToAndromeda()
, renderTranslateQuerySqlToAndromeda()
, and lowLevelQuerySqlToAndromeda()
.
Bugfixes:
-insertTable()
and all column names require quotes.insertTable()
and all column names require quotes.Changes:
@@ -133,11 +145,13 @@Bugfixes:
-connectionString
is empty string (instead of NULL
).connectionString
is empty string (instead of NULL
).Changes:
-Bugfixes:
+Bugfixes:
Fixing ‘DBMS not supported’ error when connecting to Hive.
Fixing error when bulk uploading to PostgreSQL with NULL values.
Fixing warning when automatically converting Integer64
to numeric
in R 4.2.0.
Splitting vignette into two, because many users will only need to know how to connect to their database.
Improved error messaging related to the driver folder.
Bugfixes:
-getTableNames()
when the database or schema name contains escaped characters.getTableNames()
when the database or schema name contains escaped characters.Changes:
-dropEmulatedTempTables()
function.Bugfixes:
-dropEmulatedTempTables()
function.Bugfixes:
+Changes:
@@ -177,7 +194,8 @@Bugfixes:
-Changes:
@@ -227,24 +245,30 @@Changes:
-Bugfixes:
-Bugfixes:
+Changes:
-Bugfixes:
+Bugfixes:
Preventing scientific notation when bulk uploading to PDW to avoid error.
Fixing null error when calling getSchemaNames for BigQuery.
Changes:
-Bugfixes:
-Bugfixes:
+Changes:
-Bugfixes:
+Bugfixes:
Not adding ‘#’ prefix when performing insert into RedShift.
Disabling autocommit when sending updates to RedShift to prevent errors with new JDBC driver.
Preventing ‘FeatureNotSupportedError’ from terminating query on platforms that do not support autocommit.
Added support for inserting BIGINTs (large integers stored as numeric in R)
Applying CTAS hack to improve insertion performance for RedShift (was already used for PDW)
Bugfixes:
-Changes:
-Bugfixes:
+Bugfixes:
Changes:
Bugfixes:
-Changes: initial submission to CRAN
diff --git a/docs/pkgdown.yml b/docs/pkgdown.yml index 6ab6313b..14070a25 100644 --- a/docs/pkgdown.yml +++ b/docs/pkgdown.yml @@ -1,9 +1,9 @@ -pandoc: 2.16.1 +pandoc: 3.1.1 pkgdown: 2.0.7 pkgdown_sha: ~ articles: Connecting: Connecting.html DbiAndDbplyr: DbiAndDbplyr.html Querying: Querying.html -last_built: 2023-06-22T14:57Z +last_built: 2023-06-28T14:59Z diff --git a/docs/pull_request_template.html b/docs/pull_request_template.html index bdbd6a58..ec1c3ebf 100644 --- a/docs/pull_request_template.html +++ b/docs/pull_request_template.html @@ -17,7 +17,7 @@