---
title: "Deploying RRASSLER"
output: rmarkdown::html_vignette
description: |
  "How to construct and interact with your own 'HECRAS_model_catalog'"
vignette: >
  %\VignetteIndexEntry{deploying-rrassler}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

# Deploy RRASSLER over sample data

This tutorial walks you through the technical steps you'd use to run RRASSLER over some sample data and gives a cursory overview of the data RRASSLER creates. As outlined in the [ingest steps](https://NOAA-OWP.github.io/RRASSLER/articles/ingest-steps.html) documentation, RRASSLER can be thought of as a two-step process: the first step ingests HEC-RAS models into the RRASSLE'd structure, and the second generates the spatial index. We'll assume you are at least passingly familiar with the form and function of HEC-RAS data, but if not, go back and read [Ingest steps](https://NOAA-OWP.github.io/RRASSLER/articles/ingest-steps.html).

## Assessing the sample data

### Sample data from BLE

Our first set of sample data comes to us from the FEMA Region 6 Base Level Engineering data at https://ebfedata.s3.amazonaws.com/, based on pointers from https://webapps.usgs.gov/infrm/estBFE/. See also the Texas-specific dashboard at https://www.arcgis.com/apps/dashboards/1e98f1e511fc40d3b08790a4251a64ee for more BLE models. For testing purposes, we've selected 3 models from HUC 12090301 and left them in their original folder structure (`\12090301\12090301_models\Model\Alum Creek-Colorado River\`). All models use English units and are assumed to have "[EPSG:2277](https://epsg.io/2277)" as their projection.

### Sample RAS2FIM data

Our second set of sample data is taken directly from the [RAS2FIM](https://github.com/NOAA-OWP/ras2fim) repo and covers Iowa. This folder (`ras2fim-sample-dataset`) holds both the input and output of a [RAS2FIM (v1)](https://github.com/NOAA-OWP/ras2fim/tree/V1) run, and the models are given to us in "SI Units" and in "[EPSG:26915](https://epsg.io/26915)".

To save you the keystrokes, the sections below provide everything needed to RRASSLE the sample data into a location of your choice. You'll need to decide for yourself how and where you want to run, but there is no reason to run them all.
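Both projections above get passed to RRASSLER as `proj_override` strings in the ingest calls below. If you'd like to confirm what those EPSG codes resolve to before using them, a quick look through the `sf` package works; this is purely an optional sanity check (assuming `sf` is installed) and not part of the RRASSLER workflow.

```{r, eval = FALSE}
# Optional sanity check (not a RRASSLER call): confirm the EPSG codes used as
# proj_override below resolve to the coordinate reference systems we expect.
sf::st_crs("EPSG:2277")   # NAD83 / Texas Central (ftUS) -- FEMA R6 BLE samples
sf::st_crs("EPSG:26915")  # NAD83 / UTM zone 15N -- RAS2FIM Iowa samples
```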
## Step 1: Select a location

The first step is to decide what path you want your catalog to live at. Pick one of the following options:

### A) Internal to the package

```{r, eval = FALSE}
ras_dbase <- fs::path_package("/extdata/sample_output/ras_catalog/", package = "RRASSLER")
```

### B) As a developer using a rocker-based Docker setup

```{r, eval = FALSE}
ras_dbase <- "../inst/extdata/sample_output/ras_catalog/"
```

### C) A local path

```{r, eval = FALSE}
ras_dbase <- file.path("~/data/ras_catalog/")
```

### D) A cloud account and path

Set your credentials and point at a bucket like so:

```{r, eval = FALSE}
Sys.setenv(
  "AWS_ACCESS_KEY_ID" = "AKIASUPERSECRET",
  "AWS_SECRET_ACCESS_KEY" = "evenmoresecret",
  "AWS_DEFAULT_REGION" = "us-also-secret"
)
ras_dbase <- "s3://ras-models/"
```

## Step 2: Ingest sample data

```{r, eval = FALSE}
RRASSLER::ingest_into_database(
  path_to_ras_dbase = ras_dbase,
  top_of_dir_to_scrape = "../inst/extdata/sample_ras/FEMA-R6-BLE-sample-dataset/",
  code_to_place_in_source = "test: FEMA6",
  proj_override = "EPSG:2277",
  apply_vdat_trans = FALSE,
  is_quiet = FALSE,
  is_verbose = TRUE,
  overwrite = FALSE,
  parallel_proc = FALSE
)

RRASSLER::ingest_into_database(
  path_to_ras_dbase = ras_dbase,
  top_of_dir_to_scrape = "../inst/extdata/sample_ras/ras2fim-sample-dataset/input_iowa/",
  code_to_place_in_source = "test: Iowa input",
  proj_override = "EPSG:26915",
  apply_vdat_trans = FALSE,
  is_quiet = FALSE,
  is_verbose = TRUE,
  overwrite = FALSE,
  parallel_proc = FALSE
)

RRASSLER::ingest_into_database(
  path_to_ras_dbase = ras_dbase,
  top_of_dir_to_scrape = "../inst/extdata/sample_ras/ras2fim-sample-dataset/output_iowa/05_hecras_output",
  code_to_place_in_source = "test: RAS2FIM V1 outputs",
  proj_override = "EPSG:26915",
  apply_vdat_trans = FALSE,
  is_quiet = FALSE,
  is_verbose = TRUE,
  overwrite = FALSE,
  parallel_proc = FALSE
)
```

### Extra: How to RRASSLE other sample data

> Note: This is a large dataset and can take upwards of 20 hours to run!

```{r, eval = FALSE}
## Note: paste HUC8 key into folder
RRASSLER::ingest_FEMA6_BLE(
  path_to_ras_dbase = ras_dbase,
  HUCID = "12090301",
  proj_override = "EPSG:2277",
  apply_vdat_trans = FALSE,
  is_quiet = FALSE,
  is_verbose = TRUE,
  overwrite = FALSE,
  parallel_proc = FALSE
)
```

And should a hiccup occur, you can restart the ingest like so:

```{r, eval = FALSE}
RRASSLER::ingest_into_database(
  path_to_ras_dbase = ras_dbase,
  top_of_dir_to_scrape = file.path(ras_dbase, "_temp", "BLE", "12090301", "12090301_models", fsep = .Platform$file.sep),
  code_to_place_in_source = "FEMA Region 6:12090301",
  proj_override = "EPSG:2277",
  apply_vdat_trans = FALSE,
  is_quiet = FALSE,
  is_verbose = TRUE,
  overwrite = FALSE,
  parallel_proc = FALSE
)
```

## Next steps

At this point in your RRASSLER workflow, we've ingested all the models we have on hand. In a "real world" deployment you might have hundreds or thousands more to ingest, so you'd work your way through those before applying the rest of the post-processing routines. However, those routines can also be deployed and redeployed as needed, and the article on [mapping RRASSLER](https://NOAA-OWP.github.io/RRASSLER/articles/mapping-rrassler.html) walks through how to accomplish that and what we generate with those post-processing steps.
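Before heading to that article, a quick way to confirm what the ingest step wrote to disk is to list the catalog directory. This is a plain `fs` call rather than a RRASSLER function, and it assumes `ras_dbase` points at a local directory (it will not traverse an `s3://` location).

```{r, eval = FALSE}
# Peek at the top levels of the RRASSLE'd catalog (local paths only).
fs::dir_tree(ras_dbase, recurse = 1)
```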