SENTINEL 1A-2A INCORPORATING AN OBJECT-BASED IMAGE ANALYSIS METHOD FOR FLOOD MAPPING AND EXTENT ASSESSMENT

This study presents flood extent extraction and mapping from Sentinel images. We propose an algorithm for extracting flooded areas with object-based image analysis (OBIA) using Sentinel-1A and Sentinel-2A images to map and assess the flood extent from the beginning of the event to one week after it. Multi-scale parameters were used in OBIA for image segmentation. First, we identified the flooded regions by applying the proposed algorithm to the Sentinel-1A data. Then, to evaluate the effects of the flood on each land-use/land-cover (LULC) class, Sentinel-2A images were classified using OBIA after the event. We also used the threshold method as a benchmark for the proposed OBIA algorithm to determine its efficiency in computing parameters for change detection and flood extent mapping. The findings revealed the best segmentation performance, with an Object Fitness Index (OFI) of 0.92, when a scale parameter of 60 is applied. The results also show that 2099.4 km² of the study area was flooded at the beginning of the flood. Furthermore, we found that the most flooded LULC classes are agricultural land and orchards, with 695.28 km² (32.4%) and 708.63 km² (33.7%), respectively, while the remaining 33.9% of the flooded area falls in the other classes (i.e., fish farm, built-up, bare land and water bodies). The resulting objects of each scale parameter were evaluated by the Object Pureness Index (OPI), Object Matching Index (OMI), and OFI. Finally, our Overall Accuracy (OA) assessment, incorporating field data collected with the Global Positioning System (GPS), shows 93%, 90%, and 89% for the LULC map, flood map, and inundation map, respectively.


INTRODUCTION
Floods are one of the most disastrous natural hazards, significantly affecting human life, agriculture, infrastructure, and natural resources. This phenomenon puts intense pressure on governments to provide and develop accurate and reliable frameworks to prevent, manage and mitigate flood risk at local and regional scales [1][2][3][4]. Flood mapping is considered a crucial step in flood risk assessment and mitigation. However, flood mapping using traditional in-situ and field surveys is time-consuming, especially in large and extensively flooded areas, and in some cases, such as remote areas, such surveys are impossible [4][5]. In this regard, remote sensing technologies provide valuable information at various spatial-temporal resolutions to detect and map flood extent, which is faster and more cost-effective than traditional field surveys [4]. Satellite images are a common way to map flooded areas during and after a flood. Due to technological advancements in remote sensing over the last two decades, a breakthrough has been made in rapid and real-time flood mapping and damage assessment [6][7]. The Sentinel-2 satellites from the European Space Agency (ESA) provide multispectral images at high spatial resolutions (10, 20, 60 meters) with a short revisit time (5 days) and have largely addressed former satellite limitations. However, they cannot capture images in cloudy weather, as they use passive sensors that depend on solar reflectance from the earth or atmosphere [8]; therefore, they cannot properly monitor and map flood extent [9]. Synthetic Aperture Radar (SAR) can capture images day and night and in all weather conditions [8,10]. Although SAR sensors seem more applicable to flood mapping than optical sensors, their data quality is affected by surface roughness, dielectric properties, and local topography relative to the radar look angle [11][12]. As a result, flooded areas show low backscatter and appear as dark pixels in SAR images [11,13]. The
water surface acts as a specular reflector, reflecting incoming radiation away from the SAR sensor [14]. Other features such as buildings, infrastructure, and vegetation that emerge from the water surface cause a double-bounce effect, which raises the backscatter over the water surface. Although these factors may prevent the detection of flooded areas, some other surfaces, such as dry and bare sands, can be wrongly detected as flooded areas. In mapping flooded areas, change detection is a widely used approach for extracting flooded areas from remotely sensed data [15][16]. Thus, a wide range of unsupervised and supervised methods have been applied to optical and SAR data to detect natural hazards, particularly floods [11,17]. Despite the problems associated with SAR image processing, various methods have been applied to map flood extent, including histogram thresholding [18], automatic thresholding [19], region growing [20], fuzzy classification [21][22], the split-based approach (SBA) [23][24], the active contour method (ACM) [25], texture analysis, and object-based methods [26][27]. Traditional pixel-based image classification algorithms mainly carry out flood mapping using optical data. Yet, they are limited by low accuracy, sub-pixel problems, and, more importantly, salt-and-pepper noise [28][29][30]. OBIA has been widely used to overcome the weaknesses of pixel-based image analysis [31]. OBIA is a knowledge-driven methodology that attempts to imitate human perception to represent real-world features by merging sets of similar pixels into meaningful image objects through an image segmentation process [32]. Some commonly used models, such as Support Vector Machine (SVM), decision tree (DT) [33], K-nearest Neighbors (KNN) [34], artificial neural network (ANN) [35][36] and random forest (RF), have been integrated with OBIA for image classification, natural hazards mapping, and risk assessment. For example, Rodriguez-Galiano et al.
(2019) [37] used Sentinel-1 SAR imagery to track floods. Also, the Random Forest (RF) model suggested by Breiman (2001) employs multiple classifiers (trees) in classification. However, compared to the methods mentioned for mapping flooded areas based on optical data, the existing SAR-based mapping approaches that can retrieve the desired information from SAR images are complicated [9,11]. An important question that needs to be addressed is how to use radar stack bands (R: HV, G: VV, B: HV/VV) for image segmentation while considering the variation of the return pulse due to the presence of multiple scattering sources in SAR images. Addressing it allows us to extract better information and improve the delineation of the flooded area. In March 2019, a series of flash and fluvial floods happened in Khuzestan Province, southwest of Iran. This study used both optical and SAR data (Sentinel-1A GRD and Sentinel-2A) to map flooded areas. In this regard, our objective was to use the radar stack bands (R: HV, G: VV, B: HV/VV) for image segmentation and apply the OBIA approach to extract flooded areas. Besides, this study evaluated the effects of the flood on each LULC class [35]. The authors contribute a new approach based on object-based image analysis (OBIA) to monitor floods and estimate the risk and the extent volume. Section 2 describes the study area and the data. Section 3 explains the method used and how the flooded area is extracted. Section 4 presents the results, followed by a discussion in Section 5. Finally, we wrap up the paper with a conclusion in Section 6.

Study area
The study area is a large part of Khuzestan Province (the Khuzestan Plain) in Iran, located between 30°48'30"N and 32°4'30"N latitude and between 47°51'17"E and 48°56'48"E longitude (Figure 1). Three great rivers, namely the Karoon, Karkheh and Dez, originate from the Zagros Mountains, pass through the Khuzestan Plain and discharge directly or indirectly into the Persian Gulf. The Karoon, the longest and largest river in Iran, joins the Dez River near Shushtar and falls into the Persian Gulf near Arvandrood. The Dez River, before joining the Karoon, passes through Dezful City. The Karkheh is the third-longest river in Iran, flowing to the southwest into the Hurralazim wetland on the Iranian-Iraqi border along a 755-km route. Elevation in the Khuzestan Plain ranges between 4 m and 250 m above mean sea level (MSL). The Khuzestan Plain has a hot desert climate (BWh) with moderate winters (9-20°C), hot summers (37-54°C), and an average annual precipitation of 352 mm.

Datasets
This study used Sentinel-1A and 2A data (Table 1), together with field observations, GPS data, and a Digital Elevation Model (DEM) with 12.5 m resolution. We did not have a consistent set of images on the particular dates we aimed for because of weather conditions and cloud cover during the flood. Therefore, this study used radar images (Sentinel-1A) alongside optical images (Sentinel-2A). Ground truth data collected with GPS were used to classify the objects and distinguish the various land use classes from other water bodies in the OBIA classification of the Sentinel-2A data.

Table 1. Sentinel dataset.

Sentinel-1A
Sentinel-1 consists of two satellites, Sentinel-1A and Sentinel-1B, which are part of the Copernicus program and were launched in 2014 and 2016, respectively. We used Sentinel-1A Level-1 Ground Range Detected (GRD) images taken on 19.03.2019 (the peak flood time). HV polarization is used to detect damaging flood areas: it measures the part of the emitted horizontally polarized waves that is depolarized at the earth's surface and returns vertically to the sensor. There is more contrast when the transmitted and received waves have different polarizations, which means HV polarization gives more information about the earth's surface [38]. VV polarization is also used for comparison with HV; the spatial distribution of scattering in this image helped separate the flood from other phenomena because of the higher energy in this polarization.

Sentinel-2A
We applied geometric and radiometric corrections to the Sentinel-2A Level-1C data. The data were acquired as top-of-atmosphere (TOA) reflectance radiometric values; nevertheless, atmospheric correction is needed. To apply atmospheric correction to the Level-1C images, the Sen2Cor processor plugin is used, which performs geometric and radiometric corrections in the Sentinel Application Platform (SNAP) software. Specific details of Sentinel-2A can be found in Li et al. (2020) [38] and Goffi et al. The study area has a significantly variable water volume during the flood, which impacts land uses such as agriculture and fish farms. Therefore, the three Sentinel-2A datasets mentioned above were applied in this study to monitor the land use changes and determine how the flood event impacted each class. Thus, the land use map is updated for the flooded situation to estimate the flood extent for each land use class and provide the newest information at an appropriate scale to the concerned management authorities.
We also selected images from the spring season to apply the fusion technique in object-based classification. Thus, we extracted the vegetation classes and, with respect to the original riverbed capacity, the flood extent after the water exceeded the original riverbed just before the flood. This study extracted the water class and the riverbed's full capacity from Sentinel-2A.

Topography
This study used topographic data (i.e., elevation and slope) to support the image classification and flooded-area detection. The 12.5 m DEM, generated from ALOS PALSAR images, was collected from www.asf.alaska.edu. This study used the DEM in ArcGIS 10.7.1 to extract the slope map (0° to 22°).

General workflows
This study generated a LULC classification and mapped the flood extent. Since the biggest rivers of Iran are located in our study area, and since in the last few years the water volume of these rivers has shrunk considerably toward the center of the riverbed, we needed an image in which the riverbeds are completely filled, so that the water exceeding the riverbed can be correctly counted as flood. For this reason, a winter image acquired before the flood was chosen to extract the water class. On the other hand, to estimate the damage caused by the flood to the different land uses, we needed an image showing up-to-date vegetation classes at their peak growth (the existing maps were outdated due to consecutive dry years); for this reason, the image of 26.04.2018 was chosen. Finally, both of the above-mentioned images were merged to map the land uses and validated with field data (97% accuracy).
We applied OBIA for the image classification and also used the threshold method; the extent of the areas affected by the flood was then assessed. The overall workflow is depicted in Figure 2.

Sentinel-2A spectral indices
Spectral indices are used to enhance image classification performance and improve object detection. Among a wide range of spectral indices, this study extracted the Normalized Difference Vegetation Index (NDVI), Normalized Difference Water Index (NDWI), and Enhanced Vegetation Index (EVI) for LULC mapping and water body extraction. Because changing weather conditions from one location to another reduce the accuracy of NDVI values, the EVI was adopted to detect the vegetation classes; it was created to be more consistent than NDVI by accounting for several factors that can shift the NDVI range depending on the time and the atmospheric and weather conditions. We considered these indices to determine class borders. The NDWI and bands B8A and B11 also helped to delineate the water border.
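As a minimal sketch, the three indices can be computed from the Sentinel-2A band reflectances with NumPy. The band roles (B2 blue, B3 green, B4 red, B8 NIR) and the EVI coefficients below are the standard Sentinel-2/MODIS conventions, not values taken from this study:

```python
import numpy as np

def ndvi(nir, red):
    # NDVI = (B8 - B4) / (B8 + B4); small epsilon avoids division by zero
    return (nir - red) / (nir + red + 1e-10)

def ndwi(green, nir):
    # McFeeters' NDWI = (B3 - B8) / (B3 + B8); positive values suggest water
    return (green - nir) / (green + nir + 1e-10)

def evi(nir, red, blue):
    # EVI with the commonly used coefficients (G=2.5, C1=6, C2=7.5, L=1)
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

# Toy reflectance values: index 0 is a vegetated pixel, index 1 a water pixel
nir = np.array([0.45, 0.05])
red = np.array([0.08, 0.04])
green = np.array([0.10, 0.09])
blue = np.array([0.06, 0.07])

print(ndvi(nir, red))    # high for vegetation, low for water
print(ndwi(green, nir))  # positive for water, negative for vegetation
print(evi(nir, red, blue))
```

Thresholding these index rasters (for example, NDWI > 0.3 as used later in this study) then yields class-border candidates for the segmentation.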

OBIA and RF model
In contrast to the "salt and pepper" noise that appears when pixel-based methods are applied to VHR images, OBIA has emerged as a new paradigm in satellite image analysis (i.e., image classification and object extraction) [31]. In OBIA, image segmentation is the most critical stage; thus, wrong segmentation parameters (scale, shape, and compactness) can considerably affect the results [39][40].
In this study, multi-resolution segmentation, a bottom-up region-merging method that merges small regions based on segmentation parameter thresholds, is applied to the Sentinel-2A images. The segmentation parameters are usually obtained from expert knowledge and image characteristics through a trial-and-error process. The multi-resolution segmentation method is implemented for Sentinel-2A images in eCognition software (https://docs.ecognition.com).
The ESP2 tool was used to find the optimal scale parameter for the segmentation. This tool provides the user with three different scale values (fine to coarse) using the multi-resolution segmentation method; it resulted in fine, moderate, and coarse scale values of 20, 60, and 100, respectively. In addition, three geometrical and spectral indices (i.e., OPI, OMI, and OFI) are used to evaluate the appropriateness of the proposed scale parameters in image segmentation. The OPI evaluates object integrity in terms of spectral characteristics.
Owing to the strong positive correlation between the spectral bands of Sentinel-2A, the standard deviation (SD) values of each object in the spectral bands are quite close to each other, whereas the mean values of the spectral bands of each object vary considerably. The OMI is also used for assessing the spatial matching between the resulting objects and the reference features. The OPI and OMI are calculated using Eq. 1 and Eq. 2, where SDB, SDG, and SDR refer to the SD of the blue, green, and red spectral bands, respectively, and Max SD is the maximum SD value of the three applied multispectral bands. In the ideal case, where each object's spectral variation is low and insignificant, the OPI values are close to 1, whereas lower OPI values indicate that the resulting objects are heterogeneous and the variances of the spectral bands are high.
In Eq. 2, R represents the ground-truth reference objects and S is an object resulting from the segmentation process. An OMI value equal to 1 indicates a perfect match between the reference and the resulting objects; OMI values lower than 1 indicate over-segmentation, and values greater than 1 indicate under-segmentation.
The OFI (Eq. 3) is additionally used to quantify the balance between the spectral and spatial indices. The RF model is selected as the object classifier to classify the generated image objects and map LULC [41]. This model was chosen as our classifier because (i) it is insensitive to overfitting and noise, (ii) it requires few parameters, (iii) it is computationally faster than other ensemble ML models, and (iv) it can be applied to complex, high-dimensional data [42]. To implement the RF model, we defined two parameters: the number of decision trees and the number of predictors at each node. In this study, the optimal classification result is obtained using 300 trees and 40 predictors at each node (see Section 4.2).
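A minimal sketch of this RF configuration using scikit-learn's RandomForestClassifier; the study does not name its software implementation, and the synthetic object features and labels below are illustrative only:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the OBIA feature table: 500 image objects,
# 50 features each (the study used 40 candidate predictors per split).
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 50))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy flood / non-flood labels

# The paper's configuration: 300 trees, 40 predictors considered at each node.
rf = RandomForestClassifier(n_estimators=300, max_features=40, random_state=0)
rf.fit(X, y)
train_acc = rf.score(X, y)
print(train_acc)
```

In practice, each row would hold the per-object spectral, shape and texture features selected by FSO, and the labels would come from the GPS-referenced training objects.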

Flood extent extraction using Sentinel-1A
To develop the proposed algorithm, we considered the following steps.
Step 1: Preprocessing. To properly exploit Sentinel-1 GRD data, some pre-processing steps and corrections were carried out, including precise orbit correction, thermal noise removal, calibration, speckle filtering, Range-Doppler terrain correction, and conversion to dB. A detailed explanation can be found in [6,14,43].
Step 2: We applied the above steps to both the HV and VV polarization images. The HV, VV, and HV/VV band-ratio images, in backscatter values, were stacked in RGB format. The image ratio was generated to aid visual interpretation of the SAR image and to support image classification and flooded-area extraction.
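The stacking step can be sketched as below. The per-band min-max normalization for display is our assumption, and the study does not state whether the HV/VV ratio is computed on linear or dB values; this sketch uses the dB values directly:

```python
import numpy as np

def stack_rgb(hv_db, vv_db):
    """Stack calibrated backscatter (dB) into an R: HV, G: VV, B: HV/VV composite."""
    ratio = hv_db / (vv_db + 1e-10)  # band ratio highlights polarization differences
    def normalize(band):
        # Scale each band to [0, 1] so the composite can be displayed as RGB
        lo, hi = band.min(), band.max()
        return (band - lo) / (hi - lo + 1e-10)
    return np.dstack([normalize(hv_db), normalize(vv_db), normalize(ratio)])

# Toy 2x2 scene: the left column (lower backscatter) mimics flooded pixels
hv = np.array([[-22.0, -10.0], [-20.5, -9.0]])
vv = np.array([[-18.0, -6.0], [-16.0, -5.5]])
rgb = stack_rgb(hv, vv)
print(rgb.shape)
```

The resulting three-band array is what the segmentation step would consume, with the band weights assigned later in the algorithm.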
Step 3: We used an appropriate segmentation configuration for the object-based classification stage, which is described in the next step of this workflow.
The following algorithm was developed to merge contiguous objects with similar textures and other properties and extract the flooded objects. The standard deviation of HV was another effective feature: flooded objects adopted values below 0.08 in this case. Then, Feature Space Optimization (FSO) is used to identify the best combination of features for image object classification. As a result, using the RF classifier, the entire image was classified into flood and non-flood classes, and the flooded extent was then extracted.
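The object-level decision rule that emerges from these features (mean HV, mean HV/VV ratio, and HV standard deviation, with the thresholds reported in the results) can be sketched as follows. The exact rescaling under which the HV mean of flooded objects tends to 1 is not specified in the paper, so the inputs here are assumed to be already rescaled object statistics:

```python
import numpy as np

def is_flooded(obj_hv, obj_hvvv):
    """Rule-based check on one image object: mean HV >= 1 (rescaled values),
    HV standard deviation < 0.08, and mean HV/VV ratio >= 0.8."""
    return (np.mean(obj_hv) >= 1.0 and
            np.std(obj_hv) < 0.08 and
            np.mean(obj_hvvv) >= 0.8)

# Toy per-object pixel statistics (assumed rescaled units, illustrative only)
flooded_obj_hv = [1.00, 1.02, 1.01]   # homogeneous, mean near 1
flooded_ratio = [0.85, 0.90, 0.88]    # ratio mean above 0.8
dry_hv = [0.40, 0.50, 0.45]           # mean well below 1
print(is_flooded(flooded_obj_hv, flooded_ratio), is_flooded(dry_hv, flooded_ratio))
```

In the actual workflow these statistics are computed per segmentation object, and the rule acts as a pre-filter before the RF classification.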

Thresholding
Histogram thresholding is one of the most important methods for segmenting monochrome images; it involves separating the image into gray-scale ranges based on peaks in the histogram [44]. To identify flooded areas using this technique, it is assumed that the backscatter values of the target regions in SAR images are very low. Therefore, we only needed to select pixels below a given threshold value to recognize the affected areas. However, determining a suitable threshold has a great effect on the results; thus, the brightness variance of the pixels should be estimated to set the threshold criteria for flooding. The flooded pixels can be identified as [45]

PD: x < μ − kf · σ

where PD denotes the flooded pixels, μ is the mean of the pixels, σ is the standard deviation of the pixels, and kf is a calibration coefficient.
To determine the flooded pixels in the image, we need to define a suitable threshold; based on the pixel brightness variance, we can estimate the flooded region and set the threshold criteria. This study performed several trial-and-error iterations in SNAP software, incorporating field visits to the flood. Thus, the optimal value of kf was determined to be 1.5 for this region. The kf calibration was tested against several other criteria, including the amount and coverage of residual speckle and the distinction of characteristic flooding types (i.e., inundation of dry streambeds and proximity to rivers). In the post-processing step, it was necessary to remove the shadow effect from the results. The shadow effect occurs where the radar wave is prevented from reaching the ground surface; in other words, low-backscattering regions such as water areas in the SAR image resemble shadow, thereby creating false positives for flooding. Therefore, using the DEM, regions with slope values > 15° were assumed to be shadows and removed from the results.
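The μ − kf·σ rule together with the slope-based shadow removal can be sketched as follows (the backscatter and slope values are toy data, not the study's):

```python
import numpy as np

def flood_mask(backscatter_db, slope_deg, kf=1.5, slope_cut=15.0):
    """Flag pixels darker than (mean - kf * std) as flooded, then drop
    likely radar-shadow pixels on slopes steeper than 15 degrees."""
    mu, sigma = backscatter_db.mean(), backscatter_db.std()
    mask = backscatter_db < (mu - kf * sigma)
    mask &= slope_deg <= slope_cut  # remove shadow-prone steep terrain
    return mask

# Toy scene: two very dark pixels (one on flat ground, one on a steep
# slope that should be rejected as shadow) among bright land pixels.
img = np.array([-24.0, -23.0, -8.0, -8.5, -7.5, -8.2, -7.8, -8.0])
slope = np.array([2.0, 20.0, 3.0, 1.0, 2.0, 4.0, 1.0, 2.0])
mask = flood_mask(img, slope)
print(mask)
```

With kf = 1.5, only the dark pixel on flat terrain survives both tests, matching the post-processing logic described above.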

Validation and accuracy assessment
For the accuracy assessment of the image classification and the flood extent extraction, OA and Kappa were used. Since image objects were classified, the validation points were first obtained using an inventory map, fieldwork (Figure 3) and GPS data, and then overlaid with the image objects. In the next step, the corresponding objects were labelled as test data. Using a confusion matrix for the classified Sentinel-2A map, all metrics for each class were calculated. Next, the same procedure was carried out with 650 test objects to evaluate the accuracy of the flood extent extracted from the Sentinel-1A data. Finally, the proposed algorithm was tested against the threshold method to compare the results and the efficiency in computing the parameters for change detection and flood extent mapping.
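The OA and Kappa metrics can be computed from a confusion matrix as sketched below; the matrix values are illustrative (a two-class flood/non-flood case with 650 test objects), not the study's results:

```python
import numpy as np

def overall_accuracy_and_kappa(confusion):
    """Overall Accuracy and Cohen's Kappa from a confusion matrix
    (rows: reference classes, columns: predicted classes)."""
    cm = np.asarray(confusion, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n  # observed agreement = Overall Accuracy
    # expected chance agreement from the row and column marginals
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
    kappa = (po - pe) / (1 - pe)
    return po, kappa

# Toy matrix: flood vs. non-flood over 650 test objects
cm = [[300, 20],
      [25, 305]]
oa, kappa = overall_accuracy_and_kappa(cm)
print(round(oa, 3), round(kappa, 3))
```

The same function applies unchanged to the multi-class LULC confusion matrix; only the matrix dimensions grow.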
Figure 3. Drone images taken from the field visits.

Image segmentation and object evaluation
Different parameters were used throughout the segmentation process. By visual inspection, the segmentation result with shape 0.7 and compactness 0.3 was selected as having the best image segmentation parameters. Table 2 shows the values of OPI, OMI, and OFI for each applied scale parameter; the segmentation with a scale of 60 resulted in the optimal objects, whereas scales lower and higher than 60 suffer from over-segmentation and under-segmentation problems, respectively. B8A and B11 were given heavier weights than the other bands because the water reflectance response at 864.7 nm and 1613.7 nm is more distinctive than at other wavelengths. Therefore, this study shows that a scale of 60 gives the best segmentation result compared to the others.

LULC mapping using OBIA and RF model
In OBIA, the processing units are image objects, which enable us to calculate and derive additional statistical and geometrical information from the LULC mapping objects. Object features are calculated, including shape index, object location, area, length-to-width ratio, rectangular fit, roundness, compactness, extent, NDVI mean, brightness, SD, mode, median, gray-level co-occurrence matrix (GLCM) mean, GLCM contrast, and GLCM homogeneity. This study identified the best combination of features to classify the image objects (Table 3). The classification result obtained with the RF model configured as described in Section 3.3 is depicted in Figure 4.
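A simplified NumPy sketch of the GLCM texture features (mean, contrast, homogeneity) for a single image object; eCognition computes these internally, and the quantization to eight gray levels and the single horizontal offset here are our simplifying assumptions:

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """GLCM mean, contrast and homogeneity for one image object,
    using a single (dx, dy) pixel offset."""
    # Quantize the object's pixel values to a few gray levels
    q = np.floor(img / img.max() * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[q[y, x], q[y + dy, x + dx]] += 1
    glcm /= glcm.sum()  # normalize counts to co-occurrence probabilities
    i, j = np.indices(glcm.shape)
    mean = (i * glcm).sum()
    contrast = ((i - j) ** 2 * glcm).sum()
    homogeneity = (glcm / (1.0 + (i - j) ** 2)).sum()
    return mean, contrast, homogeneity

uniform = np.full((8, 8), 7.0)                  # a perfectly smooth object
checker = np.indices((8, 8)).sum(0) % 2 * 7.0   # a high-contrast pattern
print(glcm_features(uniform), glcm_features(checker))
```

A smooth (e.g., open water) object yields zero contrast and homogeneity of 1, while a highly textured object yields high contrast and low homogeneity, which is what makes these features useful for separating classes.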

Proposed algorithm and flood mapping
To map flooded areas, the difference between the backscatter values of water bodies (flooded areas) and other areas is an important factor in identifying flooded regions. In this case, for better mapping and visualization of flooded areas, the Sentinel-1A images in HV and VV polarizations, along with the HV/VV band ratio (Figure 5), were stacked together using the proposed algorithm. The study shows that the backscatter values of flooded areas (objects) range from -23.5 dB to -15.95 dB in HV polarization and from -21.03 dB to -11.65 dB in VV polarization, while for non-flooded areas they vary between -14.44 dB and -7.14 dB in HV and between -9.85 dB and -4.59 dB in VV.
To precisely determine the flooded area, supporting data such as the DEM and slope are used. Then, using training data of the flooded area, the RF model was trained and applied to the image objects, after which the flood classification and maps were generated (Figure 6). The classification result shows that 2099.4 km², equal to one-quarter of the study area, was flooded. Spatial overlay analysis in ArcGIS indicated that the classes with the highest proportion of flooded area are agricultural land and orchard, at 32.4% and 33.7%, respectively; thus, the flood event mainly extended over these two classes. The remaining 33.9% of the flooded area consists of the bare land, water bodies, built-up and fish farm classes (15.6%, 10%, 7.3% and 1%, respectively; see Table 4). The situation a week after the flood was assessed using Sentinel-2A images, which were used to map the flooded LULCs and the new water bodies. The Sentinel-2A data went through the pre-processing procedure, and the multispectral bands of the image were used to extract flooded areas by calculating the NDWI index. In this case, NDWI values greater than 0.3 were considered water; therefore, in ArcGIS, regions with values above 0.3 were extracted and overlaid with the flood extent at the peak of the event (Figure 7). In addition, the threshold method was applied in SNAP to map the flood extent, and the proposed algorithm was tested against it; we found almost similar results, with slightly better efficiency in computing time (about 15% less) for the proposed algorithm. Figure 8 depicts the flooded areas on 19.03.2019 and 25.03.2019. Table 4 illustrates that the highest flood extent rates relate to the farming fields and orchards due to the continuation of the flood event. At the beginning of the flood, 32.4% of the farming fields were flooded, while one week after the flood this was 33.7%. The flooded fish farm area extended from 1% to 22.33%, while the built-up areas involved in the flood decreased from 7.3% to 5.74%.
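The per-class overlay reported above can be sketched as a simple raster operation; the class codes, rasters, and pixel area below are illustrative, not the study's data:

```python
import numpy as np

def flooded_area_by_class(flood, lulc, pixel_area_km2, class_names):
    """Overlay a binary flood mask on a LULC raster and report flooded
    area (km^2) and share of the total flooded area per class."""
    stats = {}
    total = flood.sum() * pixel_area_km2
    for code, name in class_names.items():
        area = ((lulc == code) & flood).sum() * pixel_area_km2
        stats[name] = (area, 100.0 * area / total if total else 0.0)
    return stats

# Toy rasters: 1 = agriculture, 2 = orchard, 3 = bare land
lulc = np.array([[1, 1, 2], [2, 3, 3]])
flood = np.array([[True, True, True], [False, True, False]])
stats = flooded_area_by_class(flood, lulc, pixel_area_km2=0.0001,
                              class_names={1: "agriculture", 2: "orchard", 3: "bare land"})
print(stats)
```

The same logic underlies the spatial overlay performed in ArcGIS, with the real LULC raster and the Sentinel-derived flood masks as inputs.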
Considering the Sentinel-2A data one week after the flood event, the results revealed that some areas were still flooded and that the flood had extended to new areas because of the water it brought in. Additional sample points for training and testing the RF classifier are used to evaluate the accuracy of the produced maps, i.e., the LULC and flood extent maps from the Sentinel-1A and Sentinel-2A data. For this, a confusion matrix is generated for the LULC and flood maps, and then the OA and Kappa metrics are calculated; sample points and visual inspection are also used to assess the accuracy of the maps. The accuracy results are presented in Table 5. We categorized flood levels in the area using Sentinel-1A during the cloudy and rainy conditions, which supported a timely understanding of the situation. We required both HV and VV polarizations to extract more information about the flood boundaries; therefore, an RGB composite of HV, VV, and HV/VV proved efficient. After calibration and transformation into backscatter values, this RGB composite served as the input of the proposed OBIA-based algorithm. Setting the shape factor to 0.7, higher than the compactness (0.5), at a scale of 60 provided the best basic segmentation among the tried and tested alternatives for this most challenging step in OBIA. The band weights used to obtain this segmentation were 3, 2, and 1 for HV, HV/VV, and VV, respectively, which in turn demonstrates that HV polarization is more important than VV in this framework. The results showed that a zone is flooded where the HV mean tends to 1 and its standard deviation is lower than 0.08. The HV/VV ratio mean is another factor that participated in extracting the flooded area in the algorithm, where it was greater than or equal to 0.8. In fact, by using the HV polarization product of Sentinel-1A individually and then taking advantage of its beneficial interplay with VV, we obtained the optimal result in mapping the flood during the event.
However, after a week, we monitored the study area and determined the new situation. When the sky was clear a week after the disaster, the flooded areas were extracted by applying the NDWI to the Sentinel-2A imagery. It is estimated that the flooded area was reduced to 627.22 km². The bare land and orchard classes experienced the greatest reduction, whereas the water bodies and fish farm areas showed little reduction in flood level. Meanwhile, newly flooded areas appeared considerably, damaging agricultural zones more than the other LULC classes.

CONCLUSION
In March 2019, due to heavy rainfall, a series of floods hit the northern, western, and southeastern parts of Iran and caused severe damage, especially in Khuzestan Province. This study concluded that assessing the flood extent in the Khuzestan Plain using Sentinel-1A and Sentinel-2A data and applying the proposed OBIA algorithm improved efficiency (15% shorter computing time than the threshold method) in change detection and mapping. We applied the OBIA method to Sentinel-2A data to map the LULCs in the study area, and then the flooded areas were mapped using Sentinel-1A and Sentinel-2A data. The threshold method also played an auxiliary role in testing the proposed algorithm, and we obtained almost similar results from the two methods in mapping the flood extent. The OA and Kappa accuracy assessment indicated acceptable performance of the image classification for LULC mapping and of the OBIA method and proposed algorithm for detecting flooded areas.
In severe weather, it is almost impossible to map flooded areas through fieldwork. This study concluded that the proposed algorithm can work on Sentinel data to assess floods and generate extent maps using OBIA. The combination of the data with a proper OBIA method provided accurate information on the flooded areas and flood extent in the Khuzestan Plain. Besides, our approach to extracting flood maps from Sentinel-1A proved a supportive strategy for flood extent mapping. Nevertheless, the authors suggest using continuous remote sensing data and developing a deep learning method to predict future potential hazards [46][47][48]. Finally, due to its short revisit time, excellent spatial resolution, and free availability, the Sentinel constellation is the best option for that aim.

Figure 1. Study area in Khuzestan Province, southwest of Iran, where a series of flash and fluvial floods occurred.

Figure 2. Change detection and flood extent mapping process in three steps: methodology flowchart.

Algorithm 1: Extracting the flooded area from Sentinel-1 GRD.
Input: radar stack bands (R: HV, G: VV, B: HV/VV); i and n are the object index and the number of objects.
Step 4: Apply image segmentation (scale: 60, compactness: 0.5, shape: 0.7; band weights R: 3, G: 1, B: 2). In this step of the workflow, first adopt the appropriate scale, compactness, shape and band weights. Next, the object thresholds are set on the various factors and functions considered before; for example, the means of the HV polarization and of the HV/VV ratio, together with the standard deviation of HV, played a key role in detecting flooded areas. Furthermore, flood sample points stored with a Garmin eTrex 10 GPS were overlaid with the segments to select training image objects for flood classification.
Step 5: Finally, the objects whose mean values in HV and HV/VV are ≥ 1 and ≥ 0.8, respectively, and whose HV standard deviation is below 0.08, are filtered as flooded.

Figure 4. Pre-flood classification map of the study area using the RF model.

Figure 5. Optimal RGB visualization and segmentation of Sentinel-1A data. (a) RGB visualization of the VH, VV and HV/VV bands; (b) optimal segmentation result of the Sentinel-1A RGB image using multiresolution segmentation. The segmentation parameters (scale 55, shape index 0.8 and compactness 0.2) were selected among six different combinations.

Figure 6. Sentinel-1A stacked images and flood mapping result. (a) RGB composition of the HV, VV, and HV/VV bands of Sentinel-1A; (b) classification result of the Sentinel-1A data using the OBIA method.

Figure 7. (a) NDWI index variation over the study area: values close to 1 show water and flooded areas, while values close to -1 represent areas without water content.

Table 2. Parameters used for the multiresolution segmentation.

Table 3. Mean and SD values of the features selected by FSO for classification at different scales.

Table 4. LULC extent at the beginning and after the flood.

Table 5. Accuracy assessment result of LULC map, flood map and inundation map.

Image segmentation, one of the principal image processing techniques, is a prerequisite for flood detection. Sentinel image segmentation comprises object recognition and delineation of the flooded area. Using shape and compactness enabled an effective evaluation of the quality of the image segmentation and image classification, and the comparison of the proposed approach with the threshold method confirmed the effectiveness of the best segmentation result. The OBIA classification revealed that the most impacted areas belong to agricultural activities such as farming and fruit production. The classification result shows six main LULC types in the study area: bare land, built-up, agricultural land, orchard, river and water body, and fish farm. Due to the high similarity between water bodies and fish farms, it was impossible to separate them during classification. Therefore, for the post-classification stage, the most recent map (1:25,000 scale) of fish farms was obtained from the Agriculture Organization of Khuzestan Province, Iran, and compiled into the final classification map. At the flood peak (19.03.2019), the flood affected various parts in the west and east, some parts of the north, and many regions in the southern part of the study area. This March 2019 flood in Khuzestan Province, Iran, affected the central and eastern parts of the area more than the west. Furthermore, the southern zones showed a more noticeable result associated with the increased flood volume (25.03.2019). The threshold technique was also efficient for monitoring the flood situation a week after the event. More than three-quarters of the central part of the study area (1454 km²), which belongs to agricultural lands and orchards, received the most flood damage during the flood peak. Among the other LULC classes hit by the flood (bare land, built-up, water body, and fish farm), the highest share belongs to bare land (15.6%) and the lowest to fish farms (1%). Monitoring and mapping the flooded areas while considering the damage to the LULC classes identified the safe areas during the event.