On Wed, 26 Dec 2007, Manoj Rajagopalan wrote:

> Does anyone know what the following variables in the fields_chunk
> class are used for? I have attempted to understand them first, and I
> have included my interpretation as doxygen-style comments for the ones
> I feel sure about.
>
> /// Stores pointers to the (complex-valued) fields.
> /// E.g.: f[Ex][0] is a pointer to the array of real parts of the Ex field;
> ///       f[Hy][1] is a pointer to the array of imaginary parts of the Hy field.
> double *f[NUM_FIELD_COMPONENTS][2];
> /// ?
This is exactly what the previous comment says. Note that any given
pointer may be NULL if the corresponding field component is not stored;
e.g. f[*][1] is NULL if the fields are purely real. All of these arrays
hold data on the corresponding Yee lattice, of course, so, for example,
f[Ex][0][0] (the 0th grid point of the real part of the Ex field)
corresponds to a different physical grid location than f[Ey][0][0].

> double *f_backup[NUM_FIELD_COMPONENTS][2];
> /// ?

Most of these pointers are NULL, but in a few cases we make a backup
copy of one of the field components. The main case in which we do this
is for the flux_in_box commands and similar: to compute E x H properly,
we need E and H at the same time, but this doesn't occur naturally
because FDTD uses a leapfrog scheme in which E and H are stored at time
steps offset by dt/2. So, in these cases, we make a backup copy of H,
step it to the other side of E in time, and use the average of the two
H fields to get the field at the same time as E. In particular, see the
fields::backup_h and fields::restore_h functions in energy_and_flux.cpp,
and how they are used. Note that this is not needed or used for
flux-spectrum calculations, which use the Fourier transform rather than
the fields at any individual time step.

> double *f_p_pml[NUM_FIELD_COMPONENTS][2];
> /// ?
> double *f_m_pml[NUM_FIELD_COMPONENTS][2];
> /// ?

Currently, Meep's PML implementation uses the old-style "split-field"
PML (from Berenger's original paper). This means that in the PML
regions each field component is split into two fictitious components,
and these arrays store the fictitious split fields. (They are NULL in
non-PML chunks.) (At a future date we will switch to a UPML
implementation that is more memory-efficient, but that code is still in
the alpha stage and is not yet released.)

> double *f_backup_p_pml[NUM_FIELD_COMPONENTS][2];
> /// ?
> double *f_backup_m_pml[NUM_FIELD_COMPONENTS][2];
> /// ?

Same as f_backup above.
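To make the f[component][reim] layout concrete, here is a minimal
sketch (not actual Meep code; chunk_sketch and get are hypothetical
stand-ins, and the enum is abbreviated) of how one might read a
complex field value while respecting the NULL convention described
above:

```cpp
// Hypothetical sketch of the f[component][reim] storage convention.
// f[c][0] points to the real parts, f[c][1] to the imaginary parts;
// either pointer may be NULL if that data is not stored.
#include <complex>
#include <cstddef>

enum component { Ex, Ey, Ez, Hx, Hy, Hz, NUM_FIELD_COMPONENTS };

struct chunk_sketch {
  double *f[NUM_FIELD_COMPONENTS][2]; // [component][0=real, 1=imag]

  std::complex<double> get(component c, std::size_t idx) const {
    double re = f[c][0] ? f[c][0][idx] : 0.0;
    // f[c][1] is NULL when the fields are purely real:
    double im = f[c][1] ? f[c][1][idx] : 0.0;
    return std::complex<double>(re, im);
  }
};
```

Remember that idx here indexes a point on that component's own Yee
sublattice, so the same idx for two different components refers to two
different physical locations.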
> int num_each_direction[3];
> /// ?

This is a copy of fields_chunk::v::yucky_num(0/1/2). That is, it is the
number of pixels in the chunk along the 3 directions (X/Y/Z or
whatever, in the order given by volume::yucky_dir). In other words, the
field arrays in the chunk form a num_each_direction[0] x
num_each_direction[1] x num_each_direction[2] three-dimensional array
in row-major order. (Note that, e.g., in two dimensions
num_each_direction[0] = 1.)

> int stride_each_direction[3];
> /// ?

This is the stride of the data in the field arrays along each of the
directions 0/1/2, indexed the same way as num_each_direction. That is,
for a field array pointed to by p (e.g. p = f[Ez][0]), the (i,j,k)
coordinate (with an appropriate choice of origin) is stored at
p[i*stride[0] + j*stride[1] + k*stride[2]].

If you don't understand the concept of a "stride", you probably need to
understand how multidimensional arrays are stored. Realize that a
"three-dimensional" L x M x N array in row-major order is stored as a
contiguous chunk of memory pointed to by some pointer p (e.g. p =
f[Ez][0]). In this contiguous chunk, row-major order (Google it) means
that the (i,j,k) entry (i = 0..L-1, j = 0..M-1, k = 0..N-1) is stored
at p[(i*M + j)*N + k]. That means that when you increment i by 1, the
memory address increments by M*N; when you increment j by 1, the memory
address increments by N; and when you increment k by 1, the memory
address increments by 1. Thus, the strides are M*N, N, and 1,
respectively. In other words, multidimensional arrays must be stored so
that some directions are discontiguous in memory, and thus we must have
a stride: the memory offset between consecutive elements along each
dimension.

> int num_any_direction[5];
> /// ?
> int stride_any_direction[5];
> /// ?

The same thing as above, but indexed by a direction (X/Y/Z/R/P) instead
of 0/1/2.
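The stride arithmetic above can be sketched in a few lines of plain C++
(these helper functions are illustrative, not part of Meep): the direct
row-major formula and the strided formula compute the same offset.

```cpp
#include <cstddef>

// Offset of element (i,j,k) in a row-major L x M x N array (L does not
// appear in the formula), computed directly from the row-major layout:
std::size_t offset_direct(std::size_t i, std::size_t j, std::size_t k,
                          std::size_t M, std::size_t N) {
  return (i * M + j) * N + k;
}

// The same offset computed via precomputed strides, as in
// p[i*stride[0] + j*stride[1] + k*stride[2]]:
std::size_t offset_strided(std::size_t i, std::size_t j, std::size_t k,
                           std::size_t M, std::size_t N) {
  const std::size_t stride[3] = {M * N, N, 1}; // row-major strides
  return i * stride[0] + j * stride[1] + k * stride[2];
}
```

For example, in a 2 x 3 x 4 array the element (1,2,3) sits at offset
1*12 + 2*4 + 3 = 23, the last element of the array.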
We could probably have done without these variables and just called the
corresponding member functions of volume when needed (they are copied
into local variables before being used by any inner loops anyway). Oh
well.

> polarization *olpol;

For dispersive materials, the auxiliary polarization field is
time-stepped using a leapfrog scheme that requires us to store it at
two time steps; olpol points to the polarization (pol) from the
previous time step. At each time step, the data in olpol is replaced by
the polarization at the new time step, and then the pol and olpol
pointers are swapped.

Regards,
Steven G. Johnson

_______________________________________________
meep-discuss mailing list
[email protected]
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss
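The pol/olpol double-buffering can be sketched as follows (hypothetical
function and update rule, not the actual Meep time-stepping code). The
point is that no array is copied: the "old" buffer is overwritten in
place with the new values and the two pointers are then exchanged.

```cpp
#include <algorithm>
#include <cstddef>

// Leapfrog-style update of a polarization field stored at two time
// steps.  pol holds step t, olpol holds step t-1 on entry; on exit,
// pol holds step t+1 and olpol holds step t.  The update rule here
// (new = 2*current - old) is a placeholder for the real equation of
// motion of the dispersive material.
void step_polarization(double *&pol, double *&olpol, std::size_t n) {
  for (std::size_t i = 0; i < n; ++i)
    olpol[i] = 2.0 * pol[i] - olpol[i]; // overwrite old data with new step
  std::swap(pol, olpol); // pol -> newest step, olpol -> previous step
}
```

Swapping pointers instead of copying arrays is what makes keeping two
time levels of the polarization cheap.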

