## Abstract

This paper presents the formulation of an adaptive finite volume (FV) model for the shallow water equations. A Godunov-type scheme is reformulated with the Haar wavelet basis to enable solution-driven resolution adaptivity (both coarsening and refinement) governed by the wavelet's threshold value. The ability to properly model irregular topographies and wetting/drying is transferred from the (baseline) uniform-mesh FV model with no notable extra effort. Selected hydraulic tests are employed to analyse the performance of the Haar wavelet FV shallow water solver, considering adaptivity and practical issues including the choice of the threshold value driving the adaptivity, a mesh convergence study, and shock and wet/dry front capturing abilities. Our findings show that Haar wavelet-based adaptive FV solutions offer great potential to improve the reliability of multiscale shallow water models.

- finite volume method
- Haar wavelets
- multiresolution
- one-dimensional (1D)
- shallow water modelling

## INTRODUCTION

Within computational hydraulics, the finite volume (FV) Godunov-type method has been extensively used for modelling shallow water flows due to its desirable properties, such as locality, numerical mass conservation and ability to inherently incorporate discontinuous flow transitions within the numerical solution (Toro 2001; Bouchut 2007; Toro & Garcia-Navarro 2007).

The original Godunov method for the gas dynamics equations assumes a *piecewise-constant* representation of the numerical solution over each discrete control volume. Connecting these approximate solutions via fluxes obtained from the solution of Riemann problems between cells makes the scheme well suited to solving nonlinear equations containing discontinuities (Harten *et al.* 1983). Further theoretical and numerical considerations have been given to applying Godunov-type FV methods to shallow water flow problems in order to properly incorporate source terms (mainly the bed slope term) and model evolving wet/dry fronts (LeVeque & George 2008; Liang & Borthwick 2009; Liang & Marche 2009; Kesserwani 2013). Following decades of research, FV Godunov-type methods have become widely applied to simulate real-scale flooding and have been adopted in commercial hydraulic modelling software packages such as TUFLOW-FV and RiverFlow2D PLUS. Nevertheless, real large-scale shallow flows involve complex flow features, such as shocks and contact discontinuities, over a wide range of spatial scales. Typically, the computational domain is discretized uniformly using a large number of cells, since the position of flow features is usually unknown and capturing certain small scales within a coarse mesh simulation may be difficult without an adverse computational cost trade-off. Automated mesh adaptation is therefore an appealing way to improve modelling efficiency and capture the various physical scales involved in shallow water flows.

Various adaptive techniques have been developed within the FV framework applied to solve the shallow water equations (SWE). These include moving mesh methods (Skoula *et al.* 2006) and static grids with local refinement (Nikolos & Delis 2009; Caviedes-Voullième *et al.* 2012). However, most techniques to date operate over patches of grid refinement, in a decoupled manner, which gives rise to many problematic effects (Nemec & Aftosmis 2007; Liang *et al.* 2008; Kesserwani & Liang 2012a, 2015; Mocz *et al.* 2013). For instance, they require error sensors and multiple user-chosen parameters (e.g., for setting up grid resolution coarsening vs. refinement), which introduce sensitivity (e.g., they can lead to inadequate or excessive resolution), inflexibility and problem dependency (e.g., due to the need to tune many parameters for each simulation problem). They also lack a rigorous strategy to accommodate flow data transfer and recovery between the various inner resolution scales (given the changing nature of the mesh). Therefore, the design of a more integrated adaptation approach, which can address such problems, remains a crucial challenge (Hovhannisyan *et al.* 2014) and is the aim of this work.

The classical wavelet theory for the local decomposition/reconstruction of (*self-similar*) functions (by translation and dilation of a single mother function, or wavelet, at finer resolution) offers a natural way to adaptively compress data. Mathematicians and engineers have widely used the multiresolution capabilities of wavelets in many applications (e.g., signal processing and denoising) (Gargour *et al.* 2009), including the solution of partial differential equations. The true power of such a multiresolution approach arises from the fact that local *piecewise-constant* data can be described as a set of scaling coefficients encapsulating higher-resolution details, thereby allowing (Harten 1995; Keinert 2003): (a) genuine information exchange between (heterogeneous) elements with matching resolution (promotion and demotion of the details via application of high- and low-pass filters); (b) achievement of grid resolution adaptivity by selecting certain details from the local compression dataset; and (c) quantitative control of the variation of the adaptive mesh solution with reference to the underlying uniform mesh solution at the finest resolution.

In the context of FV Godunov-type modelling, a couple of papers have successfully integrated wavelet theory for the adaptive solution of homogeneous conservation laws (scalar and system) (Harten 1995; Müller 2003). However, the implementation and implications of this idea in addressing practical aspects of shallow water flow simulation remain unexplored. To fill this gap, the Haar wavelet basis is used to reformulate a FV Godunov-type method into a new adaptive multiresolution scheme, which will be referred to as the Haar wavelet finite volume (HWFV) scheme. The technical development of the HWFV scheme solving the one-dimensional (1D) SWE with source terms and wetting and drying is described, with particular focus on how wavelet-based adaptivity is achieved within the HWFV SWE numerical solver. Selected test cases are used to systematically verify the performance of the HWFV scheme, addressing issues of adaptivity parameterization, modelling of irregular topography with/without wetting and drying, accuracy preservation and mesh convergence.

The rest of the paper is organized as follows: a brief overview of the 1D SWE is first presented, followed by a section describing the baseline FV method. Next, the multiresolution analysis (MRA) and its mathematical properties are introduced, with particular focus on how the Haar wavelet basis and its scaling basis are incorporated into the FV method. Finally, the performance of the HWFV model is tested, analysed and discussed, and conclusions are drawn.

## SWE

The 1D SWE considering the bed source term can be cast in a conservative matrix form:

$$\frac{\partial \mathbf{U}}{\partial t} + \frac{\partial \mathbf{F}(\mathbf{U})}{\partial x} = \mathbf{S} \quad (1)$$

$$\mathbf{U} = \begin{bmatrix} h \\ q \end{bmatrix}, \quad \mathbf{F}(\mathbf{U}) = \begin{bmatrix} q \\ \dfrac{q^2}{h} + \dfrac{g h^2}{2} \end{bmatrix}, \quad \mathbf{S} = \begin{bmatrix} 0 \\ -g h \dfrac{\partial z}{\partial x} \end{bmatrix} \quad (2)$$

where *t* is the time (s), *x* is space (m) and **U**, **F**(**U**) and **S** are the vectors containing the conserved variables, the fluxes and the bed source terms, respectively, in which *h* is the water depth (m), *q* is the flow rate per unit width (m²/s), *g* is the gravitational acceleration (m/s²) and *z* is the bed elevation (m).

## OVERVIEW OF THE FV GODUNOV-TYPE FRAMEWORK

In this section, the Godunov-type FV method is briefly presented (Garcia-Navarro & Vazquez-Cendon 2000; Fraccarollo *et al.* 2003; Bouchut 2007). The computational domain is divided into uniform and non-overlapping cells. A cell *i* is defined as $I_i = [x_{i-1/2}, x_{i+1/2}]$ with a cell size $\Delta x = x_{i+1/2} - x_{i-1/2}$ and a centre $x_i$. Integrating Equation (1) in space over the *i*th cell and a time interval $[t, t+\Delta t]$ yields the following conservative discrete form of the SWE:

$$\mathbf{U}_i^{t+\Delta t} = \mathbf{U}_i^{t} - \frac{\Delta t}{\Delta x}\left(\tilde{\mathbf{F}}_{i+1/2} - \tilde{\mathbf{F}}_{i-1/2}\right) + \Delta t\, \mathbf{S}_i \quad (3)$$

where $\mathbf{U}_i^{t}$ and $\mathbf{U}_i^{t+\Delta t}$ are piecewise-constant averages representing the local numerical solution at the present and the next time level, respectively. $\tilde{\mathbf{F}}_{i\pm 1/2}$ are the numerical fluxes at the cell interfaces, which are approximated according to a two-argument numerical flux function that is based on the approximate Riemann solver of Roe (Roe 1981) and includes the bed slope source term. Alternatively, Equation (3) can be rewritten in the following semi-discrete form:

$$\frac{\mathrm{d}\mathbf{U}_i}{\mathrm{d}t} = \mathbf{L}_i(\mathbf{U}) \quad (4)$$

$$\mathbf{L}_i(\mathbf{U}) = -\frac{1}{\Delta x}\left(\tilde{\mathbf{F}}_{i+1/2} - \tilde{\mathbf{F}}_{i-1/2}\right) + \mathbf{S}_i \quad (5)$$

The techniques adopted for the well-balanced discretization of the bed slope source term with wetting and drying are well established and verified; for more details see Liang & Marche (2009), Kesserwani & Liang (2012b) and Kesserwani (2013). Herein, a fixed small positive depth threshold is used to define dry cells. The stability constraint on the time step size (Cockburn & Shu 2001) is controlled by a Courant–Friedrichs–Lewy (CFL) number, which is chosen to be 0.98 for simulations involving fully wet domains and 0.5 when wet/dry fronts are present.
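To make the update in Equation (3) concrete, the following minimal sketch advances a 1D SWE state one explicit step on a uniform, fully wet mesh. It is not the paper's solver: an HLL flux stands in for the Roe solver, the bed is flat (so **S** = 0), boundaries are transmissive, and the function names are our own.

```python
import numpy as np

g = 9.81  # gravitational acceleration (m/s^2)

def hll_flux(UL, UR):
    """HLL approximate Riemann flux for the 1D SWE, U = (h, q).
    (A stand-in for the Roe solver used in the paper.)"""
    hL, qL = UL
    hR, qR = UR
    uL, uR = qL / hL, qR / hR
    cL, cR = np.sqrt(g * hL), np.sqrt(g * hR)
    sL = min(uL - cL, uR - cR)          # leftmost wave speed estimate
    sR = max(uL + cL, uR + cR)          # rightmost wave speed estimate
    FL = np.array([qL, qL * uL + 0.5 * g * hL ** 2])
    FR = np.array([qR, qR * uR + 0.5 * g * hR ** 2])
    if sL >= 0.0:
        return FL
    if sR <= 0.0:
        return FR
    dU = np.array([hR - hL, qR - qL])
    return (sR * FL - sL * FR + sL * sR * dU) / (sR - sL)

def fv_step(h, q, dx, cfl=0.98):
    """One explicit Godunov-type update (cf. Equation (3)) on a uniform,
    fully wet mesh with a flat bed and transmissive boundaries."""
    dt = cfl * dx / np.max(np.abs(q / h) + np.sqrt(g * h))  # CFL time step
    he = np.concatenate(([h[0]], h, [h[-1]]))  # ghost cells (transmissive)
    qe = np.concatenate(([q[0]], q, [q[-1]]))
    F = np.array([hll_flux((he[i], qe[i]), (he[i + 1], qe[i + 1]))
                  for i in range(len(he) - 1)])
    h_new = h - dt / dx * (F[1:, 0] - F[:-1, 0])
    q_new = q - dt / dx * (F[1:, 1] - F[:-1, 1])
    return h_new, q_new, dt
```

On a toy dam-break state (still water, a depth jump mid-domain), the interior fluxes telescope and the boundary mass fluxes vanish, so the total depth is conserved over a step.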

## MRA

In this section, the multiresolution framework, along with the choice of basis functions for the HWFV scheme, is presented. The key feature of MRA is the separation of the behaviour of functions at different resolutions. For simplicity, MRA over each cell *I _{i}* is presented for the reference interval [−1, 1] onto which each cell is mapped. For example, any function *f* supported on a baseline interval, spanning the baseline space *V*_{0}, can be described on a dyadic sub-division of the baseline interval such that $V_0 \subset V_1 \subset \cdots \subset V_n$ (Harten 1995). In particular, this property is valid for a function space that can be spanned by a scaling basis *ϕ* (Figure 1). From the father basis *ϕ*, it is possible to span any sub-space *V _{n}* via dilation and translation (Keinert 2003):

$$\phi_j^{(n)}(x) = 2^{n/2}\,\phi\!\left(2^{n} x - j\right) \quad (6)$$

where *n* is the dilation index and *j* is the translation index.

The wavelet sub-spaces *W _{n}* (*n* ≥ 0) come into play as an orthogonal complement of *V _{n}* in *V _{n+1}*, and they satisfy the conditions:

$$V_{n+1} = V_n \oplus W_n \quad (7a)$$

$$V_n \perp W_n \quad (7b)$$

Taking the Haar wavelet *ψ* as a mother basis that spans *W*_{0} (Figure 2), any sub-space *W _{n}* can also be spanned via its dilation and translation:

$$\psi_j^{(n)}(x) = 2^{n/2}\,\psi\!\left(2^{n} x - j\right) \quad (8)$$

From Equation (6), the function *f* can be described in any space *V _{n}* as a linear combination of scaling bases $\phi_j^{(n)}$ and scaling coefficients $s_j^{(n)}$:

$$f(x) \approx \sum_j s_j^{(n)}\,\phi_j^{(n)}(x) \quad (9)$$

This is called the single-scale expansion of *f* at resolution *n*, where $s_j^{(n)}$ can be computed (or initialized) as:

$$s_j^{(n)} = \int f(x)\,\phi_j^{(n)}(x)\,\mathrm{d}x \quad (10)$$
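The initialization of the scaling coefficients by the projection integral can be illustrated with simple quadrature. This sketch uses the Haar father function on the unit interval [0, 1) rather than the paper's reference interval [−1, 1], and the function names are our own.

```python
import numpy as np

def haar_phi(x):
    """Haar father (scaling) function on the unit interval [0, 1)."""
    return np.where((x >= 0.0) & (x < 1.0), 1.0, 0.0)

def scaling_coeffs(f, n, quad=2000):
    """Initialize the scaling coefficients s_j at resolution n by midpoint
    quadrature of the projection integral (cf. Equation (10))."""
    x = (np.arange(quad) + 0.5) / quad                    # midpoint nodes on [0, 1)
    s = np.empty(2 ** n)
    for j in range(2 ** n):
        phi = 2.0 ** (n / 2) * haar_phi(2 ** n * x - j)   # dilated/translated basis
        s[j] = np.sum(f(x) * phi) / quad                  # midpoint rule
    return s
```

For a constant function *f* = 1, each coefficient at level *n* evaluates to 2^(−n/2), which follows directly from the normalization of the basis.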

Owing to the nested sequence of subspaces and the use of Equations (6), (7), (8) and (10), an alternative expression of *f* can be obtained by means of the coarse description in *V*_{0} (i.e., at resolution *n* = 0) and its complementary details in the spaces *W*_{0}, …, *W _{n−1}*:

$$f(x) \approx \sum_j s_j^{(0)}\,\phi_j^{(0)}(x) + \sum_{m=0}^{n-1} \sum_j d_j^{(m)}\,\psi_j^{(m)}(x) \quad (11)$$

This representation is called the multiresolution expansion of *f* at resolution *n*, where the detail coefficients $d_j^{(m)}$ can be computed (or initialized) as:

$$d_j^{(m)} = \int f(x)\,\psi_j^{(m)}(x)\,\mathrm{d}x \quad (12)$$

In the expansion (Equation (11)), the representation of the function *f* at higher resolution (*n* > 0) is enabled by the detail coefficients when they are non-zero.

### Two-scale transforming coefficients

Owing to the *orthonormality* and the *compact support* properties of *ϕ* and *ψ*, rigorous exchange of data across two resolution levels can be obtained by the so-called two-scale transformation. To do so, two types of *filter bank* coefficients (a common term in the field of signal processing) are needed. For simplicity, the two-scale transformation is explained for data transfer between resolutions (0) and (1). The first type of coefficients {*k*_{0}, *k*_{1}} is associated with *low-pass filters*, i.e., Equation (13); whereas the second type of coefficients {*g*_{0}, *g*_{1}} is associated with *high-pass filters*, i.e., Equation (14):

$$\phi(x) = k_0\,\phi_0^{(1)}(x) + k_1\,\phi_1^{(1)}(x) \quad (13a)$$

$$k_0 = k_1 = \tfrac{1}{\sqrt{2}} \quad (13b)$$

$$\psi(x) = g_0\,\phi_0^{(1)}(x) + g_1\,\phi_1^{(1)}(x) \quad (14a)$$

$$g_0 = -g_1 = \tfrac{1}{\sqrt{2}} \quad (14b)$$

Low-pass filter coefficients {*k*_{0}, *k*_{1}} are used to merge the two scaling coefficients at resolution (1) into one scaling coefficient at resolution (0) (this is referred to as demoting):

$$s^{(0)} = k_0\,s_0^{(1)} + k_1\,s_1^{(1)} \quad (15a)$$

High-pass filter coefficients {*g*_{0}, *g*_{1}} are used to obtain (and store) the complementary detail coefficient of the scaling coefficients at resolution (1) in resolution (0):

$$d^{(0)} = g_0\,s_0^{(1)} + g_1\,s_1^{(1)} \quad (15b)$$

Combined use of both filter coefficients allows the scaling coefficients at resolution (1) to be computed from the scaling coefficient at resolution (0) and its (stored) complementary detail coefficient (this is referred to as promoting):

$$s_0^{(1)} = k_0\,s^{(0)} + g_0\,d^{(0)} \quad (16a)$$

$$s_1^{(1)} = k_1\,s^{(0)} + g_1\,d^{(0)} \quad (16b)$$

By recursive application of the two-scale transform equations, the expansion of scaling coefficients describing a function *f* can be demoted or promoted across any different resolutions (Figure 3). Consequently, promoting and demoting the coefficients across different resolutions, i.e., by using Equations (15) and (16), is the keystone of the HWFV scheme.
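The demote and promote operations of Equations (15) and (16) can be sketched with the Haar filter coefficients; this toy implementation (our own naming) operates on an array of scaling coefficients at one level:

```python
import numpy as np

# Haar filter bank: low-pass {k0, k1} (Equation (13)) and
# high-pass {g0, g1} (Equation (14)).
K0 = K1 = 1.0 / np.sqrt(2.0)
G0, G1 = 1.0 / np.sqrt(2.0), -1.0 / np.sqrt(2.0)

def demote(s_fine):
    """Merge pairs of scaling coefficients from level (n+1) to level (n)
    and store their complementary details (Equations (15a)-(15b))."""
    s0, s1 = s_fine[0::2], s_fine[1::2]
    return K0 * s0 + K1 * s1, G0 * s0 + G1 * s1

def promote(s_coarse, d):
    """Recover the level-(n+1) scaling coefficients from the level-(n)
    ones and their stored details (Equations (16a)-(16b))."""
    out = np.empty(2 * len(s_coarse))
    out[0::2] = K0 * s_coarse + G0 * d
    out[1::2] = K1 * s_coarse + G1 * d
    return out
```

Because the filter matrix is orthonormal, a demote followed by a promote is lossless whenever no detail coefficient is discarded, which is exactly what makes the two-scale transformation a rigorous data exchange between levels.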

## HWFV METHOD

This section shows how to exploit the multiresolution description of a function *f* (recall the section ‘MRA’) to decompose the local piecewise-constant solution (recall the section ‘Overview of the FV Godunov-type framework’) into a compressed data set of the solution's information, which drives adaptivity. The strategy is to first generate a FV discretization at the highest resolution (herein presented for *n* = 2, without loss of generality) to produce a fine uniform reference mesh. Then, over this mesh, MRA is performed to select up to which resolution the local numerical solution needs to be described (i.e., to form a heterogeneous grid for the actual FV calculations). In doing so, three main steps are needed: prediction for careful mesh refinement, multiresolution FV update, and a thresholding step for mesh coarsening through solution decompression.

### Multiresolution FV formulation

Following the discretization presented in the ‘Overview of the FV Godunov-type framework’ section, a higher resolution mesh is introduced. Each cell $I_i$ of the baseline coarse mesh is subdivided into $2^n$ sub-cells, namely $I_{i,j}^{(n)}$, of a local spatial resolution $\Delta x_n = \Delta x / 2^n$. A sub-cell centre is denoted by $x_{i,j}^{(n)}$. Figure 4 shows the multiresolution stencil. In this multiresolution setting, the FV formulation described in Equations (4) and (5) can be rewritten as:

$$\frac{\mathrm{d}\mathbf{U}_{i,j}^{(n)}}{\mathrm{d}t} = \mathbf{L}_{i,j}^{(n)}(\mathbf{U}) \quad (17)$$

$$\mathbf{L}_{i,j}^{(n)}(\mathbf{U}) = -\frac{1}{\Delta x_n}\left(\tilde{\mathbf{F}}_{i,j+1/2}^{(n)} - \tilde{\mathbf{F}}_{i,j-1/2}^{(n)}\right) + \mathbf{S}_{i,j}^{(n)} \quad (18)$$

Here, the translation index *j* is related to the resolution level *n* of the local cell.

Equations (17) and (18) need only be applied at the local resolution level (i.e., on the adaptive mesh). This mesh is selected by first casting each local solution as in Equation (11), which requires demoting the high-resolution data in order to store the detail coefficients and produce the scaling coefficient at level (0). Then, a mesh prediction step locally promotes the solution to refine the grid; this yields the adaptive mesh on which the calculation is performed. Finally, hard thresholding is performed to reduce the density of the mesh by discarding all non-significant details that fall below a certain threshold value (details in the sections below).

### Prediction step for mesh refinement

Since the flow field evolves in time, the prediction step must be performed after each time step to guarantee that no significant features of the numerical solution are omitted in the next time step. The prediction strategy is based only on the information available at the current time level (Müller 2003; Müller & Stiriba 2007). Generally, the detail coefficients of predicted cells are not available; thus, the local solution over the predicted cells is promoted by simply setting zero detail coefficients (Harten 1995). To do this, the following algorithm is used to identify those neighbouring cells that need to be further refined:

a) Find the scaling coefficients of the conserved variables at the local resolution level.

b) Compute the normalized gradient (Equation (19)) between the local cell and its neighbouring cell (see Figure 5(a)).

c) Introduce two indicators against which the normalized gradient is compared to decide the resolution levels of neighbouring cells that have significant details. Here, the indicator values 0.1 and 0.05 are chosen for all numerical test cases. The decision for local mesh refinement then takes the following form according to the magnitude of the normalized gradient: if it exceeds the larger indicator, the adaptive mesh is as in Figure 5(c); if it lies between the two indicators, the adaptive mesh is as in Figure 5(b); and if it falls below the smaller indicator, the mesh has the form illustrated in Figure 5(a).
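The three-step rule above can be sketched as follows. Since the exact normalization of Equation (19) is not reproduced here, the normalization by the maximum solution magnitude and the indicator names `SIGMA_1`/`SIGMA_2` are assumptions of this sketch, not the paper's definitions.

```python
import numpy as np

# Hypothetical names for the two indicator values quoted in the text.
SIGMA_1, SIGMA_2 = 0.1, 0.05

def predicted_levels(s, max_level=2):
    """Assign a target refinement level per coarse cell from the normalized
    gradient between neighbouring cells (a sketch of the prediction rule)."""
    scale = np.max(np.abs(s))
    scale = scale if scale > 0.0 else 1.0
    grad = np.abs(np.diff(s)) / scale            # normalized gradients
    levels = np.zeros(len(s), dtype=int)
    for i, gr in enumerate(grad):
        if gr >= SIGMA_1:        # strong variation: refine to the finest level
            lvl = max_level
        elif gr >= SIGMA_2:      # moderate variation: intermediate level
            lvl = 1
        else:                    # smooth: keep the baseline resolution
            lvl = 0
        levels[i] = max(levels[i], lvl)          # flag both neighbours
        levels[i + 1] = max(levels[i + 1], lvl)
    return levels
```

Applied to a step-like field, only the two cells straddling the jump are flagged for the finest level, while smooth regions stay at the baseline.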

### Thresholding step for the decompression of mesh

Thresholding is applied to the detail coefficients after each update to decompress the mesh. The detail coefficients $d_j^{(n)}$ become small when the numerical solution is smooth; therefore, they can be discarded without substantially affecting the accuracy of the numerical solution. To do so, all detail coefficients whose absolute values are below a normalized level-dependent threshold value $\varepsilon_n$ are discarded, i.e., if they satisfy:

$$\left| d_j^{(n)} \right| < \varepsilon_n \quad (20)$$

The components of $d_j^{(n)}$ are associated with the components of the conserved variables in **U** (i.e., *h* and *q*). In real computations, it is impossible to know the optimal threshold value, but a range of options is feasible (Gerhard & Müller 2013; Hovhannisyan *et al.* 2014), as will be investigated in the following section. By default, in this work, the threshold value is prescribed at the coarsest level and normalized according to the resolution levels. In addition, the magnitudes of the detail coefficients are scaled according to the maximum value of the numerical solution.
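A sketch of the hard-thresholding step follows. The level-dependent scaling `eps * 2**(n - L)` and the normalization by a solution magnitude `umax` are assumptions consistent with the description above (level normalization and scaling by the solution maximum), not formulas taken from the paper.

```python
import numpy as np

def threshold_details(details, eps=0.01, umax=1.0):
    """Hard thresholding (cf. Equation (20)): zero out detail coefficients
    whose scaled magnitude is below a level-dependent threshold.
    `details` maps resolution level n -> array of detail coefficients."""
    L = max(details)                              # finest level present
    kept = {}
    for n, d in details.items():
        eps_n = eps * 2.0 ** (n - L)              # level-dependent threshold
        significant = np.abs(d) / umax >= eps_n   # significance test
        kept[n] = np.where(significant, d, 0.0)
    return kept
```

Details below the per-level threshold are zeroed, which triggers the corresponding local coarsening of the mesh; significant details are left untouched.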

### Adaptivity of topography

The topography is projected onto the highest resolution (*n* = 2) in the same way as the conserved variables; thus, the compressed data set of bed information over each cell is obtained, and the adaptive bed mesh follows from applying the thresholding step. However, the topography adaptation is performed once (i.e., initially) and remains constant throughout the simulation time. To ensure the preservation of an accurate water surface elevation and mass conservation across levels, the bed mesh is considered as a reference: the HWFV scheme does not allow demoting the local numerical solution to a resolution level lower than the topography refinement, even if the local numerical solution would allow a lower refinement level.

## NUMERICAL RESULTS

This section shows the ability of the new HWFV scheme to solve 1D shallow water problems with fewer cells than the reference FV scheme, while retaining accuracy, total mass in the system, well-balancing and positivity of the water depth. Five well-known benchmark problems are considered for validation of the HWFV scheme, and the root mean square error (RMSE) and the relative mass error (RME) are computed. Two of the test cases are idealized dam-breaks over frictionless beds (regular and irregular), the third and fourth test cases consider steady flow over a hump (transcritical flow with shock and supercritical flow), and the fifth test case provides a deeper analysis of oscillatory flow in a parabolic bowl.

### Idealized dam-break

The purpose of this case is to test the capability of the HWFV scheme to efficiently and accurately solve the homogeneous SWE. The solution is compared to the exact solution using the RMSE and maximum error metrics. A 1D channel with a horizontal frictionless bed is considered. The length of the channel is 2,000 m and an imaginary dam is located 1,000 m from the upstream end. Initially, the upstream water level is 20 m, whereas at the downstream end the water level is 5.0 m; the initial discharge is set to zero in every cell. Boundary conditions, although set to be numerically transmissive, are effectively irrelevant in this case as the propagating wave does not reach the boundary. At the instant of dam failure, water is released, producing a shock wave travelling downstream while a rarefaction wave forms and propagates upstream. The computational domain at the coarse level (*n* = 0) is discretized with 71 uniform cells and the model is run up to 40 s after the dam-break. Since the HWFV model allows for up to three levels of mesh refinement, the size of the reference fine uniform mesh is 568 cells. The numerical results are illustrated in Figure 6. The highest level of resolution is reached at the shock wave and at the kink at the tail of the rarefaction wave. The other zones of the rarefaction wave are captured with the intermediate resolution levels 1 and 2, while in the rest of the domain, where the solution is smooth, the HWFV retains the baseline coarse level. In this test, the adaptive solution required a maximum of 124 cells (22% of the fine reference mesh).

Further quantitative analysis is performed by calculating the RMSE for both the water depth and the flow rate, i.e.:

$$\mathrm{RMSE}_{h} = \sqrt{\frac{1}{AC}\sum_{i=1}^{AC}\left(h_i - \hat{h}_i\right)^{2}} \quad (21)$$

$$\mathrm{RMSE}_{q} = \sqrt{\frac{1}{AC}\sum_{i=1}^{AC}\left(q_i - \hat{q}_i\right)^{2}} \quad (22)$$

in which AC is the total number of active cells forming the adaptive mesh, $h_i$, $q_i$ are the numerical results and $\hat{h}_i$, $\hat{q}_i$ are the reference data (from the analytical solutions). Figures 7 and 8 compare the depth RMSEs and maximum absolute errors (in the L-infinity norm), respectively, for the different uniform and adaptive meshes considering up to a maximum resolution level of *n* = 3. Clearly, both error profiles show a decrease in error magnitude with an increase in baseline mesh resolution, which is expected from grid convergence. The variations of the RMSEs resulting from the adaptive HWFV models are bounded between the RMSEs obtained from the coarsest and the finest uniform mesh models. Such bounding of the adaptive RMSEs is expected because an adaptive scheme at a resolution *n* is allowed to coarsen down to the baseline resolution 0. The maximum depth errors, by contrast, are governed by the highest resolution (due to the shock presence), and their trends are consistent in this sense: (i) any of the adaptive schemes at resolution *n* shows an error range comparable to its uniform reference counterpart; and (ii) the variation of the adaptive errors is consistent level-wise. In addition, a more quantitative analysis is considered by tabulating the mean and standard deviation (SD) of the RMSEs (see Table 1). These results show a decreasing trend for the means, accompanied by a faster reduction in their SDs, as the refinement levels in the adaptive scheme deepen. Therefore, the HWFV framework performs the adaptivity process without introducing additional errors to the numerical solution and is able to select the resolution level sensibly, in step with the dynamics of the flow. For this test, the adaptive model simulation costs around 78% less computational effort than the simulation on the finest uniform mesh.
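The RMSE metric over the active cells is a direct average of squared differences; a generic helper (our own naming) applicable to depth or flow rate alike could read:

```python
import numpy as np

def rmse(numerical, reference):
    """Root mean square error over the AC active cells of the adaptive
    mesh (cf. Equations (21)-(22))."""
    numerical = np.asarray(numerical, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return np.sqrt(np.mean((numerical - reference) ** 2))
```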

In the following test cases, the highest level will be fixed to *n* = 2 and the focus will be placed on exploring applied aspects that are relevant to hydraulic modelling.

### Steady flow over a hump

This case is employed to test the performance of the current scheme in reproducing a steady flow over non-uniform topography in a frictionless rectangular channel 1 m wide and 25 m long. The analytical solution was supplied by Goutal & Maurel (1997) and the bed reads:

$$z(x) = \begin{cases} 0.2 - 0.05\,(x - 10)^{2} & \text{if } 8 < x < 12 \\ 0 & \text{otherwise} \end{cases} \quad (23)$$

#### Transcritical case involving shock

The initial water surface and flow rate per unit width are set to *h* + *z* = 0.33 m and *q* = 0.18 m^{2}/s, respectively. Physical boundary conditions consist of the steady discharge at the inflow and the initial water level at the outflow. The coarse baseline mesh comprises 50 uniform cells and simulations are noted to converge at around *t* = 170 s. For this test, the adaptive HWFV model has also been implemented and assessed with the upwind discretization of the source term (Vázquez-Cendón 1999). Figure 9 presents the numerical results of the adaptive scheme (with *n* = 2) compared to the analytical solutions. The numerical water surface profile shows very good agreement with the analytical solution for both the upwind and the standard cell-centred source term discretization. The flow rate prediction is typically improved by the upwind source term discretization, apart from the peak caused by the presence of the hydraulic jump. Arguably, this is a known deficiency in standard FV schemes and, thus, does not relate to the proposed wavelet-based adaptivity (Garcia-Navarro & Vazquez-Cendon 2000; Toro & Garcia-Navarro 2007). In this case, the majority of the domain required the coarse resolution level, except at the hump, which dictated local refinement level 1 from the onset, and at the discontinuities (i.e., the starting kink of the transcritical transition and the shock), where the level was refined to the highest.

#### Supercritical flow case

To ensure that the disturbances observed in the discharge in the previous test case (Figure 9) are not induced by adaptivity over the local cells (see the section ‘MRA’), the supercritical flow case over a hump is also considered. The channel geometry and bed topography are identical to the previous test case, but the water surface elevation and unit inflow rate at the upstream end of the channel are set to *h* + *z* = 2 m and *q* = 25.0567 m^{2}/s, respectively. Both of these physical values are used as steady-state inflow boundary conditions, whereas a free outlet is numerically set. The simulation time is 10 s and 50 cells are chosen for the baseline coarse mesh. The results are shown in Figure 10. Both the constant flow rate and the water surface profile are well captured in comparison with the analytical solution. Again, mesh refinement was obtained only in the regions where both the bed elevation and the flow are varying. However, no artefacts are noted in the prediction of the discharge for either the upwind or the cell-centred discretization. Consequently, the two test cases demonstrate the good performance of the adaptive shallow water flow model in reproducing well-balanced steady flows over topography, and in selecting resolution levels in line with flow and topographic features.

### Dam-break over a triangular hump

This test case is employed to verify the capability of HWFV to preserve the total mass in the system in order to ensure that the adaptivity process does not introduce or lose mass even in the presence of shocks and wet/dry fronts. A hypothetical dam-break over a triangular hump test is set up (see Figure 11).

The length of the horizontal flume is 38 m, with a dam located 15.5 m from the upstream end. A reservoir with a water surface elevation of 0.75 m is located upstream of the dam and the rest of the domain is dry. The mesh resolution at the coarse level consists of 13 cells. Reflective boundary conditions are set at both ends of the channel. At *t* = 0 s, the dam is assumed to fail, causing violent wave propagation; namely, the wetting front rushes into the floodplain, overtops and interacts with the obstacle, creating a reflected wave that is again reflected by the boundary walls. Since the system is closed, the initial mass of water should be conserved. During the simulation, the total mass of water at time *t*, $m(t)$, is computed and compared with the total initial discrete mass, $m(0)$, according to the RME below:

$$\mathrm{RME} = \frac{m(t) - m(0)}{m(0)} \quad (24)$$

The results in Figure 12 show that the adaptive HWFV models conserve the initial total mass in the system and the magnitude of the RME is within the range of machine precision throughout the entire computational time. However, since in the HWFV scheme the mesh resolution is not fixed, the real physical mass, $m_{\mathrm{exact}}$, is also used as a reference to show the ability of the HWFV to preserve the same amount of mass as the corresponding uniform mesh; this has been done according to Equation (25) below:

$$\mathrm{RME}_{\mathrm{exact}} = \frac{m(t) - m_{\mathrm{exact}}}{m_{\mathrm{exact}}} \quad (25)$$
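The mass check described above amounts to summing depth times cell size and comparing against a reference mass; a minimal helper (our own naming) that accepts a non-uniform cell-size array, as arises on an adaptive mesh, could read:

```python
import numpy as np

def relative_mass_error(h, dx, m_ref):
    """Relative mass error: compare the discrete mass on a (possibly
    non-uniform) adaptive mesh against a reference mass m_ref, which may
    be the initial discrete mass or the exact physical mass."""
    m_t = float(np.sum(np.asarray(h) * np.asarray(dx)))  # total discrete mass
    return (m_t - m_ref) / m_ref
```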

In Figure 13, the results show that the RME profiles for the adaptive mesh levels and their uniform mesh counterparts are the same. This means the HWFV does not introduce any mass error beyond that of the underlying discretization on the fine reference uniform mesh.

### Oscillatory flow in a parabolic bowl

This case is well known and recognized as challenging for numerical models because it involves both moving wet/dry interfaces and uneven topography. It is selected here to study the accuracy and mesh convergence abilities of the adaptive HWFV scheme and to explore the sensitivity of the HWFV model in performing the adaptivity process, considering various choices of the threshold value and of the baseline coarsest mesh.

The bed is described by a parabolic profile with prescribed constants, and the case consists of an oscillatory flow taking place inside a parabolic bowl. The transient analytical solution for the water depth and velocity was proposed by Thacker (1981) (Equations (26) and (27)), where *B* = 5 is a constant value and ω is the oscillation frequency. Under these conditions the oscillation period is *T* = 1,345.94 s. The case was simulated on the domain [−5,000 m; 5,000 m] using different numbers of computational cells at the coarse level (*N*_{0}). Simulations were run up to 1.5 *T*. Boundary conditions are irrelevant because the flow never reaches the boundaries; they were set as transmissive.

#### Threshold sensitivity

To understand the effect of the threshold value on the adaptivity process, a baseline mesh with *N*_{0} = 40 cells is fixed, while considering the following threshold values: ε = 0.0, 0.001, 0.01, 0.1 and 1.0. In Figure 14, the numerical water surface profiles at *t* = 275 s (*T*/5) and *t* = 2,020 s (1.5 *T*) are compared with the analytical solution. As seen in Figure 14, the HWFV scheme refines the region through which the wet/dry interface moves, while other parts of the domain stay at the coarsest and intermediate levels of refinement. Furthermore, varying the threshold value leads to different refinement patterns. In particular, as ε increases, fewer cells are refined to higher levels during the simulation, since smaller detail coefficients are selectively omitted. In Figure 14(a), it is clear that the adaptive HWFV scheme activates all the detail coefficients when ε = 0.0 and all the computational cells go to the highest level, thus resulting in a uniform mesh. Figure 14(b) shows an almost uniform mesh prediction when ε = 0.001; hence, it is not reasonable, in terms of efficiency, to use a threshold value below 0.001. A sensible refinement is obtained with ε = 0.01 (Figure 14(c)). For ε = 0.1 and ε = 1.0 (shown in Figure 14(d) and 14(e), respectively), poor predictions at the wet/dry interfaces appear, which may be caused by the fact that a large threshold value leads to omitting some small detail coefficients, relevant to the wet/dry front, during the promotion process.

#### Baseline meshes

The choice of a baseline mesh with *N*_{0} = 40 is rather arbitrary: it might already be too fine or, on the contrary, never allow a fine enough mesh for this test case, which may affect the performance of the adaptivity process. Therefore, the influence of the baseline mesh should be studied. Several baseline meshes at coarse level *N*_{0} = 20, 40, 80, 160 and 320 are introduced to address this, using the same settings of the threshold value as reported in Figure 14. The evolution of the number of active cells is presented in Figure 15. The results confirm that all considered combinations of *N*_{0} and ε are able to produce adaptive solutions. Moreover, when refining the baseline mesh, the HWFV requires a reduction of the threshold value to better perform the adaptivity process. This is because most of the flow region yields rather smooth solutions, so the detail coefficients are small. In Figure 15, the number of active cells relative to the baseline mesh reduces as the baseline mesh is refined, regardless of the threshold. However, in Figure 15(d) and 15(e), for ε = 0.1 and ε = 1.0, this quantity is relatively unchanged, with values bounded between 1.0 and around 2.5. These values are lower than for the other threshold values, but they strongly influence the quality of the numerical solution. Notably, with ε = 0.01, regardless of *N*_{0}, optimal results are obtained in comparison to the other threshold values, with values bounded between around 1 and 4. This case shows a particular trend in how the pattern of active cells varies with *N*_{0}, indicating that ε = 0.01 is the most sensitive to *N*_{0}. Owing to this sensitivity, this threshold value allows for a wide, automatic response of the adaptive process (contrary to, for example, ε = 0.1) and is therefore likely the best for performing adaptivity promptly. An inefficient adaptive process is obtained with ε = 0.001, except in Figure 15(e), when a very fine baseline mesh is used.

#### Mesh convergence

Mesh convergence is also studied in terms of the error norm defined in Equation (28). Simulations were run up to 2,020 s, and *t* = 270 s (*T*/5) and *t* = 2,020 s (1.5*T*) are selected for analysis. Several baseline meshes at the coarse level, *N₀* = 10, 20, 30, 40, 80, 160 and 320, are introduced, using the same threshold values as defined previously. It can be seen in Figure 16 that the behaviour of the error norm is asymptotic regardless of *N₀* and *ɛ*. The uniform mesh (*ɛ* = 0.0) is taken as the reference curve for comparison, and each point within a curve is associated with one baseline mesh. The difference between the convergence curve for *ɛ* = 0.001 and the reference is relatively small. For *ɛ* = 0.01, the convergence is slightly better than all others, as shown in Figure 16(a) and 16(b). A large error is obtained with *ɛ* = 1.0 and *N₀* = 20, 30 cells; this is due to too few cells being activated during the adaptivity process (over-filtering), which affects the quality of the numerical solution. Nevertheless, the error becomes smaller as *N₀* increases, though it remains larger than the error of the reference curve. Better results are obtained for *ɛ* = 0.1, but with sensible differences compared to *ɛ* = 1.0. The magnitude of the error norm increases from Figure 16(a) to 16(b); this is merely due to numerical diffusion, an anticipated issue for first-order FV schemes.

In Figure 17, the same analysis as in Figure 16 is considered to illustrate the relative performance of CPU time (RPCPU). Figure 17(a) shows the ratio of the CPU times of the adaptive schemes to the CPU time of the associated fine uniform reference schemes (i.e., *ɛ* = 0.0), while Figure 17(b) further reports the normalized CPU times, obtained by dividing all CPU times by the maximum one (i.e., *N₀* = 320, *ɛ* = 0.0). The results clearly show that when *ɛ* > 0.0, less time is required to complete a simulation as the baseline mesh density *N₀* increases. They also show that the efficiency of the adaptive HWFV schemes is near that of their equivalent uniform mesh FV schemes when the baseline mesh has a size *N₀* ≤ 40, regardless of the choice of *ɛ*. For *ɛ* = 0.001, the RPCPU and normalized CPU time are inefficient regardless of the baseline mesh *N₀*. In contrast, for *ɛ* = 0.01, 0.1 and 1.0, they decrease significantly in proportion to the increase in baseline mesh density *N₀*. However, for *ɛ* = 0.1 and 1.0, the RPCPU tends to remain close for *N₀* ≥ 40, whereas with *ɛ* = 0.01 the RPCPU shows a consistent decrease in line with the refinement of the baseline mesh *N₀*. This suggests that a threshold value of *ɛ* = 0.01 enables the best selection among the magnitudes of the detail coefficients, and so allows optimal efficiency and accuracy in the context of the proposed HWFV model for a baseline mesh of around 40–100 cells.
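For reference, the two diagnostics used in this comparison can be written down directly. The snippet below is an illustrative sketch: the discrete L1-type norm shown here is a common choice but the norm actually used is the one defined in Equation (28), and the `rpcpu` helper simply mirrors the CPU-time ratio described in the text; both function names are assumptions.

```python
import numpy as np

def error_norm(numerical, analytical, dx):
    """Discrete L1-type error between numerical and analytical cell
    averages (illustrative; the paper's Equation (28) defines the
    norm actually used)."""
    diff = np.asarray(numerical) - np.asarray(analytical)
    return dx * np.sum(np.abs(diff))

def rpcpu(adaptive_cpu, uniform_cpu):
    """Relative performance of CPU time: adaptive run time over the
    uniform-mesh (eps = 0.0) reference run time; values below 1
    indicate an efficiency gain from adaptivity."""
    return adaptive_cpu / uniform_cpu
```

Plotting `error_norm` against *N₀* for each *ɛ* reproduces the kind of convergence curves shown in Figure 16, and `rpcpu` against *N₀* the efficiency curves of Figure 17.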

## CONCLUSIONS

Adaptive mesh refinement schemes are useful tools to efficiently model the various scales of shallow water flow. A new adaptive formulation is proposed that combines Haar wavelets with the FV method (HWFV). The formulation can be easily exploited to drive spatial resolution adaptation from the solution itself according to a single threshold value *ɛ*. A series of numerical tests has been performed, considering the well-balanced property, convergence of the scheme, sensitivity to the choice of threshold value and the ability to treat wet/dry fronts.

Numerical evidence confirms that the proposed HWFV scheme can accurately solve the SWE with source terms. Adaptive solutions are shown to be mass conservative. Notably, the new model is proven able to self-select appropriate resolution levels following the dynamics of the flow, including shocks, wet/dry fronts and the presence of non-flat topography. Future research will consist of integrating the friction source term in a well-balanced manner, exploring sensitivity issues relating to increased resolution levels with varying threshold values, and extending the approach to 2D.

## ACKNOWLEDGEMENTS

The first author acknowledges the support of the Kurdistan Regional Government, Ministry of High Education and Scientific Research. The research is further supported by the UK Engineering and Physical Sciences Research Council (grant ID: EP/K031023/1) and by the Pennine Water Group platform grant (grant ID: EP/I029346/1).

- First received 2 March 2015.
- Accepted in revised form 9 June 2015.

- © 2015 The Author(s)

This is an Open Access article distributed under the terms of the Creative Commons Attribution Licence (CC BY-NC-ND 3.0), which permits copying and redistribution for non-commercial purposes with no derivatives, provided the original work is properly cited (http://creativecommons.org/licenses/by-nc-nd/3.0/).
