Being new to GPUSPH, I've been focusing on the provided example test cases, specifically the "WaveTank". To measure my system's performance and capabilities, I ran a series of test cases while decreasing the inter-particle spacing. From my past experience with Eulerian-based models, I assumed this would be analogous to a grid convergence test. I'm not familiar with the depths of GPUSPH's numerical framework, so I don't know whether that assumption is valid.
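For context on what I mean by "analogous to a grid convergence test": with three solutions at a constant refinement ratio, one can estimate an observed order of convergence Richardson-style. This is a minimal sketch with synthetic placeholder values, not real GPUSPH gauge output; the function name and the sample spacings are my own, chosen for illustration.

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order p from three solutions obtained with a constant
    refinement ratio r (e.g. spacing halved each time => r = 2)."""
    return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

# Synthetic second-order data: f(h) = f_exact + C*h^2, spacings halved each time
f_exact, C = 0.45, 1.0
h = [0.02, 0.01, 0.005]
f = [f_exact + C * hi**2 for hi in h]
p = observed_order(f[0], f[1], f[2], r=2)
print(round(p, 3))  # → 2.0
```

In an SPH context the analogy is imperfect (the smoothing length and particle disorder also matter), which is part of what I'm asking about.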
Attached you will find a time-series plot from two different gauges, where the line colors correspond to different inter-particle spacings. One gauge is near the wave maker; the other is downstream. The main things I want to focus on in this question are: why do all cases appear to deviate from the initial still water level before the wave reaches the gauge, and why does the smallest inter-particle spacing case (0.005) show a significant drop in surface elevation compared to the other cases?
The inter-particle spacing was the only thing I changed between cases. After studying the plots and watching the animations, it seems that something is wrong with the initial conditions. Ideally, the time-series should be flat at an elevation reflecting the still water level, and should only start to change once the wave reaches the gauge. I also noticed in the animation that the particles seem to be distributed in vertical layers over the domain, and then "fall into place" after the first time steps. It's hard for me to tell, because I don't fully understand how the initialization works.
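To put a number on what I'm seeing, this is roughly how I'm quantifying the pre-arrival drift of a gauge series relative to the still water level. The gauge data here is synthetic (a small exponential settling dip plus a wave after an assumed arrival time `t_arrival`); only `H = 0.45` comes from the WaveTank source.

```python
import numpy as np

H = 0.45                      # still water level from the WaveTank source
t = np.linspace(0.0, 5.0, 501)
# Synthetic gauge: small settling dip early on, then a wave after t = 3 s
eta = H - 0.004 * np.exp(-t / 0.5) \
        + 0.05 * np.sin(2 * np.pi * (t - 3.0)) * (t > 3.0)

t_arrival = 3.0               # assumed wave arrival time at this gauge
pre = eta[t < t_arrival]      # samples before the wave should arrive
drift = np.max(np.abs(pre - H))
print(f"max pre-arrival deviation: {drift * 1000:.2f} mm")
```

With real gauge output, `drift` should ideally be near zero for every spacing; in my runs it clearly isn't, and it grows as the spacing shrinks.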
Can someone please help me understand what is going on and how I can correct it? Again, I would expect to see the gauge time-series converge and reflect the still water level until the wave hits the gauge.
As far as model parameters go: looking specifically at the "WaveTank" source code, I believe the still water level is defined by the parameter H = 0.45. Again, no other parameters were changed in the source code, and the time step is adaptive.