Hi Yves,

In terms of further simplifying the computation: if your simulation is on the scale of 50,000 or 100,000 particles, then saving time by partially "relaxing" the simulation domain is probably not necessary. The number of bodies is low to begin with, and further reducing the "effective" number of active simulation bodies can leave the GPU underutilized, blunting the performance edge of a GPU-based tool. However, letting the simulation cover a longer simulated time using fewer time steps should always help.
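As a side note on how large a step one can afford: a common DEM rule of thumb (not anything DEME-specific) is to size the step against the Rayleigh time of the smallest, stiffest sphere. A minimal sketch, with placeholder material numbers that are NOT the paper-sourced values from Yves' script:

```python
import math

def rayleigh_dt(radius, density, young, poisson, safety=0.2):
    """Conservative DEM time step estimate from the Rayleigh wave time of
    the smallest, stiffest sphere. The safety factor of ~0.2 is a common
    choice in the DEM literature, not a DEME requirement."""
    shear = young / (2.0 * (1.0 + poisson))  # shear modulus G from E and nu
    t_rayleigh = (math.pi * radius * math.sqrt(density / shear)
                  / (0.1631 * poisson + 0.8766))
    return safety * t_rayleigh

# Placeholder numbers for illustration only:
dt = rayleigh_dt(radius=1e-3, density=2500.0, young=1e9, poisson=0.3)
```

The estimate scales linearly with particle radius, which is why the smallest sphere in the system dictates the step.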
I feel the best approach is to select the time step size dynamically. If you know that during certain periods of the simulation everything is relatively "dormant", you can use a large time step size during those periods via *UpdateStepSize*, then change it back with the same method when you believe a collision that requires fine time steps to resolve is about to happen.

If you still wish to "relax" a subset of the clumps in the simulation, then family-based tricks are probably the way to go. If you believe some clumps are effectively fixed in place during a period, you can freeze them using the approach I described above. This genuinely saves time, because those clumps will simply have no contacts among themselves. You could also adjust the material associated with a subset of the clumps using *SetFamilyClumpMaterial*. However, I have to mention that different material properties hardly make any impact on computational efficiency: softer, more damped materials may allow a more lenient time step size, but the step size is still dictated by the "harshest" contact you have to resolve.

The ultimate tool is of course a custom force model. If you can design a model that is fast to solve, accurate enough for your purposes, and potentially resolves different parts of the simulation domain differently as you wished, that is probably the best option. As a starting point, if you do not need friction, try calling *UseFrictionlessHertzianModel()* before system initialization to use the frictionless Hertzian contact model; you can develop even cheaper, more specialized models after that.

Thank you,
Ruochun

On Friday, February 2, 2024 at 11:31:02 PM UTC+8 [email protected] wrote:

> Hello Ruochun,
>
> Thank you for your answer.
>
> That makes a lot of sense, especially since, in my case, I know how many I
> need from the beginning.
> Your proposed method is quite smart; I will try to implement it.
> I will run some tests and come back here to report the difference.
>
> Something else I was wondering: is there any way to "relax" the problem
> in some parts of the geometry? The bottom of the geometry will not see
> large velocities or strong changes once a few spheres have covered it,
> and the same applies to the layers above later in the simulation.
> If that is possible somehow, I expect it to be a large time saver as well.
>
> Thank you,
> Yves
>
> On Thursday, February 1, 2024 at 3:35:11 AM UTC-5 Ruochun Zhang wrote:
>
>> Hi Yves,
>>
>> I only had a brief look at the script. So what you need is to add more
>> spherical particles into the simulation, one by one, and I assume you
>> need to do this thousands of times.
>>
>> The problem is that adding clumps, i.e. calling *UpdateClumps()*, is not
>> designed to be done too frequently; it is really for adding a big batch
>> of clumps. When you call it, you need to sync the threads (perhaps the
>> cost of one round of contact detection), and then the system goes
>> through a process similar to initialization (no just-in-time
>> compilation, but still a lot of memory accesses). Although I would
>> expect it to be faster than what you measured (6.2 s), maybe you also
>> included the time needed to advance a frame in between---I didn't look
>> into that much detail.
>>
>> In any case, it's much better to avoid adding clumps at run time. If you
>> know how many you will eventually have to add, then initialize the
>> system with all of them in, but frozen (in a family that is fixed and
>> has contacts disabled with all other families). Track these clumps using
>> a tracker (or several trackers, if you want). Then, each time you need
>> to add a clump, use this tracker to move a clump in this family (using
>> its offset, starting from offset 0, then moving on to 1, 2, ... each
>> time) into a different family so it becomes an "active" simulation
>> object.
>> Potentially, you can SetPos this clump before activating it. This should
>> be much more efficient, as a known-sized simulation should be. As for
>> material properties, I don't think they have significant effects here.
>>
>> Let me know if there is any difficulty implementing it,
>> Ruochun
>>
>> On Wednesday, January 31, 2024 at 1:27:17 AM UTC+8 [email protected]
>> wrote:
>>
>>> Hello,
>>>
>>> I am working on a problem which involves dropping one sphere at a time
>>> into a geometry from its top in DEME-Engine. The geometry can have
>>> multiple hundreds of thousands of spheres poured into it, so I need
>>> something efficient. The constraint is that I always have to drop the
>>> sphere with zero velocity from the same spot.
>>>
>>> The problem I have is that it is very slow.
>>>
>>> I made the attached example, where I fast-forward to 50,000 spheres in
>>> the geometry, then drop them one by one. When measuring the performance
>>> (see attached log), I get about 6.2 seconds per drop. The overhead I
>>> measured when starting from 0 was ~0.2 s, which gives 6/50000 = 120e-6
>>> s per sphere per drop. Even if I adjust the step size perfectly per
>>> drop, filling the geometry with, say, 500,000 spheres would take me
>>> around 6 months of computation to complete.
>>>
>>> Therefore, I write to ask whether:
>>>
>>> 1. Something is wrong in my script.
>>> 2. Some values can be safely relaxed. The Young's modulus and other
>>> sphere parameters were taken from a paper, so I would prefer not to
>>> touch them. The time step already seems fairly high in my example.
>>> 3. There are techniques that could lower the computational cost of
>>> this kind of common problem.
>>>
>>> Thank you!
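[Editor's note] The dynamic step-size selection Ruochun suggests above can be sketched as follows. The dormancy criterion (maximum particle speed below a threshold) and the two step sizes are illustrative assumptions, not tuned values; the only DEME call assumed is *UpdateStepSize*, shown in a comment so the decision logic itself stays runnable:

```python
def pick_step_size(max_speed, coarse_dt=2e-5, fine_dt=2e-6, dormant_speed=0.01):
    """Return a coarse step while the system is dormant (everything slow),
    a fine one when a collision may soon need resolving. All numbers here
    are illustrative placeholders."""
    return coarse_dt if max_speed < dormant_speed else fine_dt

# Inside the main loop it might look like (solver object and speed
# estimate are assumptions):
#   dt = pick_step_size(estimated_max_speed)
#   DEMSim.UpdateStepSize(dt)
#   DEMSim.DoDynamics(frame_time)
```

The payoff is that dormant stretches of simulated time cost ten times fewer steps in this sketch, while the fine step is restored before any contact that needs it.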

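[Editor's note] The frozen-family activation pattern from Ruochun's earlier reply (quoted above) is mostly bookkeeping around a tracker offset. Everything DEME-specific below (the tracker, *SetPos*, the family-change call and its name) is an assumption shown only in comments; the runnable part is the offset bookkeeping:

```python
FROZEN_FAMILY = 1   # fixed at init, contacts disabled with all other families
ACTIVE_FAMILY = 0   # ordinary simulation family

class DropFeeder:
    """Pre-allocate all spheres frozen at initialization, then 'drop' one
    at a time by moving the next clump into the active family. This avoids
    calling UpdateClumps() during the run."""
    def __init__(self, total):
        self.total = total
        self.next_offset = 0   # next frozen clump to activate

    def drop_one(self, drop_pos):
        if self.next_offset >= self.total:
            raise RuntimeError("all pre-allocated spheres already dropped")
        offset = self.next_offset
        self.last_drop_pos = drop_pos
        # Assumed DEME calls on a tracker built at init, e.g.:
        #   tracker.SetPos(drop_pos, offset)
        #   tracker.SetFamily(ACTIVE_FAMILY, offset)  # hypothetical name
        self.next_offset += 1
        return offset

feeder = DropFeeder(total=500_000)
first = feeder.drop_one((0.0, 0.0, 1.0))   # activates the clump at offset 0
```

Since the system size never changes, the solver never pays the re-initialization cost that *UpdateClumps()* incurs; each drop is just a position update plus a family change.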