That should be fine. 'Empty' fields need not be included in the header 
line and data lines, but in this case including them is probably the 
simplest/quickest approach. Just be very careful that you don't end up with 
any missing or extra commas, as you will otherwise likely put the wrong 
data in the wrong field.
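One way to guard against miscounted commas is to let a CSV library pad the 
empty fields for you. A minimal sketch in Python (the header here is a 
shortened stand-in for illustration; the real archive header has 100+ columns):

```python
import csv
import io

# Stand-in header: the real weewx archive header has 100+ columns.
header = ["dateTime", "usUnits", "interval", "rain", "outTemp"]

# Only the fields you actually have values for; everything else stays empty.
record = {
    "dateTime": 1741612440,
    "usUnits": 1,
    "interval": 1,
    "rain": 0.00999881889763705,
}

buf = io.StringIO()
# restval="" emits an empty field for every header column missing from
# record, so the data line always has exactly as many commas as the header.
writer = csv.DictWriter(buf, fieldnames=header, restval="")
writer.writeheader()
writer.writerow(record)

print(buf.getvalue())
```

Because DictWriter derives each data line from the same fieldnames list as 
the header, a missing or extra comma simply can't happen.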

Gary
On Sunday, 16 March 2025 at 05:57:55 UTC+10 Andrew McGinnis wrote:

> Via a dump of every column in the table that exists, whether I'm using it 
> or not, the csv content could be:
>
>
> "dateTime","usUnits","interval","altimeter","appTemp","appTemp1","barometer","batteryStatus1","batteryStatus2","batteryStatus3","batteryStatus4","batteryStatus5","batteryStatus6","batteryStatus7","batteryStatus8","cloudbase","co","co2","consBatteryVoltage","dewpoint","dewpoint1","ET","extraHumid1","extraHumid2","extraHumid3","extraHumid4","extraHumid5","extraHumid6","extraHumid7","extraHumid8","extraTemp1","extraTemp2","extraTemp3","extraTemp4","extraTemp5","extraTemp6","extraTemp7","extraTemp8","forecast","hail","hailBatteryStatus","hailRate","heatindex","heatindex1","heatingTemp","heatingVoltage","humidex","humidex1","inDewpoint","inHumidity","inTemp","inTempBatteryStatus","leafTemp1","leafTemp2","leafWet1","leafWet2","lightning_distance","lightning_disturber_count","lightning_energy","lightning_noise_count","lightning_strike_count","luminosity","maxSolarRad","nh3","no2","noise","o3","outHumidity","outTemp","outTempBatteryStatus","pb","pm10_0","pm1_0","pm2_5","pressure","radiation","rain","rainBatteryStatus","rainRate","referenceVoltage","rxCheckPercent","signal1","signal2","signal3","signal4","signal5","signal6","signal7","signal8","snow","snowBatteryStatus","snowDepth","snowMoisture","snowRate","so2","soilMoist1","soilMoist2","soilMoist3","soilMoist4","soilTemp1","soilTemp2","soilTemp3","soilTemp4","supplyVoltage","txBatteryStatus","UV","uvBatteryStatus","windBatteryStatus","windchill","windDir","windGust","windGustDir","windrun","windSpeed","pm4_0","pm4_0out","pm2_5out","pm1_0out","pm10_0out","co2out"
>
> 1741612440,1,1,,,,,1.0,,,1.0,,,,,,,,,,,,58.0,,,58.0,,,,,75.02,,,77.36,,,,,,,,,,,,,,,,,,,,,,,,,,,,,238.51020260006104,,,,,,,,,,,,,0.00999881889763705,,0.0199976377952741,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
>
> where the value 0.00999881889763705 is the rain amount that I'm 
> inserting. Then there would be one observation line for each record 
> period that I need to change (that way, no change is required to the 
> column name line for periods that have a different mix of observations 
> with data)?
>
> Only 8 observations got recorded in that period I have above (I have more 
> than 8 XXXbatteryStatus observations alone), but at this point I'm just 
> seeking to add the rain observation.
>
> On Fri, Mar 14, 2025 at 9:41 PM gjr80 <[email protected]> wrote:
>
>> See comments below.
>>
>> Gary
>>
>> On Saturday, 15 March 2025 at 03:47:33 UTC+10 Andrew McGinnis wrote:
>>
>> If you want to use weectl import to import data into existing archive 
>> records, each record in your import source data (in your case your CSV 
>> data) needs to include the existing data for all fields as well as the 
>> updated rainfall data. You then need to use the (as yet undocumented) 
>> --update command line option when running weectl import. 
>>
>>
>> So rather than just the records I'm *missing*, I need to go from the 
>> first missing record to the last missing record, inclusive of all 
>> in-between records? I can easily do that, just not 100% on whether it's 
>> required. 
>>
>>
>> No, you don't need the 'in between' records. Your import data should 
>> only include the records you intend to update/alter, but you do need the 
>> data from the other fields of the records you are updating/importing. 
>> Perhaps an example will make it clearer. Say your archive table contains 
>> the following:
>>
>> dateTime    outTemp  inTemp  rain  outHumidity
>> 1742000400  23.1     20.1     1.0    80
>> 1742000100  23.2     20.1     2.0    81
>> 1741999800  23.2     20.1     5.0    84
>> 1741999500  23.3     20.2     0.0    83
>> 1741999200  23.5     20.2     0.8    83
>> 1741998900  23.5     20.2     0.6    84
>> 1741998600  23.4     20.1     0.6    82
>> 1741998300  23.3     20.1     0.8    83
>> 1741998000  23.3     20.2     0.0    84
>> 1741997700  23.3     20.1     1.0    82
>>
>> Let's say you want to update the rain value in the records timestamped 
>> 1741999500 and 1741998000 (i.e. the 0.0 values), and your new rain values 
>> (per-archive-period values) for these two records are 0.2 and 0.4 
>> respectively. So you might start with your import data as follows:
>>
>> dateTime      rain
>> 1741999500     0.2
>> 1741998000     0.4
>>
>> If you import this data using the --update command line option (or 
>> without the --update command line option if you deleted the two archive 
>> table records concerned) your archive table will be:
>>
>> dateTime    outTemp  inTemp  rain  outHumidity
>> 1742000400  23.1     20.1     1.0    80
>> 1742000100  23.2     20.1     2.0    81
>> 1741999800  23.2     20.1     5.0    84
>> 1741999500                    0.2    
>> 1741999200  23.5     20.2     0.8    83
>> 1741998900  23.5     20.2     0.6    84
>> 1741998600  23.4     20.1     0.6    82
>> 1741998300  23.3     20.1     0.8    83
>> 1741998000                    0.4    
>> 1741997700  23.3     20.1     1.0    82
>>
>> Each row in your import data is considered to be an archive record in 
>> itself and any fields that are not included in your import data will be 
>> empty. To keep the rest of the existing data in the records being updated 
>> (in this case outTemp, inTemp and outHumidity), your import data needs 
>> to include values for outTemp, inTemp and outHumidity. Your import data 
>> would be:
>>
>> dateTime    outTemp  inTemp  rain  outHumidity
>> 1741999500  23.3     20.2     0.2    83
>> 1741998000  23.3     20.2     0.4    84
>>
>> Importing this data will give you:
>>
>> dateTime    outTemp  inTemp  rain  outHumidity
>> 1742000400  23.1     20.1     1.0    80
>> 1742000100  23.2     20.1     2.0    81
>> 1741999800  23.2     20.1     5.0    84
>> 1741999500  23.3     20.2     0.2    83
>> 1741999200  23.5     20.2     0.8    83
>> 1741998900  23.5     20.2     0.6    84
>> 1741998600  23.4     20.1     0.6    82
>> 1741998300  23.3     20.1     0.8    83
>> 1741998000  23.3     20.2     0.4    84
>> 1741997700  23.3     20.1     1.0    82
>>
>> That, I suspect, is the desired result. Of course this is a very simple 
>> and unrealistic example, but hopefully you get the idea.
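>> With 100+ columns, building those update rows by hand is error-prone. A 
>> minimal sketch of one way to automate it (assuming Python and its sqlite3 
>> module; the table and the new_rain corrections below are abbreviated 
>> stand-ins, not the real archive):

```python
import csv
import io
import sqlite3

# Hypothetical corrections: new per-archive-period rain, keyed by dateTime.
new_rain = {1741999500: 0.2, 1741998000: 0.4}

# In-memory stand-in for /var/lib/weewx/weewx.sdb with only a few columns.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE archive (dateTime INTEGER, outTemp REAL,"
           " inTemp REAL, rain REAL, outHumidity REAL)")
db.executemany("INSERT INTO archive VALUES (?,?,?,?,?)", [
    (1741999500, 23.3, 20.2, 0.0, 83.0),
    (1741998000, 23.3, 20.2, 0.0, 84.0),
])

# Pull the full existing records for just the timestamps being corrected.
db.row_factory = sqlite3.Row
placeholders = ",".join("?" * len(new_rain))
rows = db.execute("SELECT * FROM archive WHERE dateTime IN (%s)"
                  % placeholders, list(new_rain)).fetchall()

# Write the import CSV: every existing column carried across, rain replaced.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=rows[0].keys(), restval="")
writer.writeheader()
for row in rows:
    rec = dict(row)
    rec["rain"] = new_rain[rec["dateTime"]]
    writer.writerow(rec)

print(buf.getvalue())
```

>> Each output row carries every existing column across unchanged with only 
>> rain substituted, which is the shape the update import needs.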
>>  
>>
>>
>> Alternatively, you could avoid using the --update command line option by 
>> first deleting the records to be imported from the WeeWX database and then 
>> importing your merged/updated CSV data.
>>
>>
>> If you'll save me the research, how do I delete specific records/periods, 
>> either a single period record or a range? I just recently noticed that the 
>> soilMoist1 observation, on my WeeWX testing install, recorded about 84 
>> hours of very wrong values that my production WeeWX install didn't (both 
>> getting the data from the same MQTT topic). I know how to drop entire 
>> observation columns via weectl database drop-columns NAME, but in this 
>> case it's just a range of periods within the column that needs purging. Or 
>> is it as simple as zeroing out the values in a range via sqlite> update 
>> archive set soilMoist1=0.0 where dateTime > {startepochtime} and dateTime 
>> <= {endepochtime}; ?
>>
>>
>>  
>> In terms of deleting records, I use the sqlite3 utility, which lets me 
>> use SQL commands to interact with the database. Some people use GUI-based 
>> SQL editors to manipulate the database; I don't, so I can't help you 
>> there. With sqlite3, the following commands can be used to delete archive 
>> table records:
>>
>> $ sqlite3 /var/lib/weewx/weewx.sdb
>> sqlite> DELETE FROM archive WHERE dateTime=1741999500;
>> sqlite> .quit
>> $
>>
>> In this case we deleted the archive record timestamped 1741999500. 
>>
>> To delete a range of archive records by timestamp use something like:
>>
>> sqlite> DELETE FROM archive WHERE dateTime>=1741969500 AND 
>> dateTime<=1741999500;
>>
>> which would delete all archive records timestamped from 1741969500 
>> to 1741999500 inclusive.
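>> For the soilMoist1 case, if you'd rather blank out just the bad values 
>> than delete the whole records, an UPDATE that sets the column to NULL 
>> (rather than 0.0, which would read as a genuine zero) does the job. A 
>> sketch using Python's sqlite3 module against a toy in-memory table (the 
>> timestamps here are made up):

```python
import sqlite3

# Toy in-memory stand-in for the WeeWX archive; timestamps are made up.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE archive (dateTime INTEGER PRIMARY KEY,"
           " soilMoist1 REAL)")
db.executemany("INSERT INTO archive VALUES (?,?)", [
    (1741969200, 12.0),   # good reading, before the bad window
    (1741969500, 99.0),   # bad reading
    (1741999500, 98.0),   # bad reading
    (1741999800, 13.0),   # good reading, after the bad window
])

# NULL, not 0.0: NULL marks the reading as missing, whereas 0.0 would be
# treated as a real soil moisture value of zero.
db.execute("UPDATE archive SET soilMoist1 = NULL"
           " WHERE dateTime >= ? AND dateTime <= ?",
           (1741969500, 1741999500))

print(db.execute("SELECT dateTime, soilMoist1 FROM archive"
                 " ORDER BY dateTime").fetchall())
```

>> The same UPDATE can be typed directly at the sqlite> prompt against 
>> weewx.sdb. After editing the archive you would normally rebuild the 
>> daily summaries, e.g. with weectl database rebuild-daily.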
>>
>> Note that you may need to install the sqlite3 utility on your system 
>> using something like (for Debian-based systems):
>>
>> $ sudo apt install sqlite3
>>
>> -- 
>> You received this message because you are subscribed to a topic in the 
>> Google Groups "weewx-user" group.
>> To unsubscribe from this topic, visit 
>> https://groups.google.com/d/topic/weewx-user/1KX3-8lbU34/unsubscribe.
>> To unsubscribe from this group and all its topics, send an email to 
>> [email protected].
>>
>> To view this discussion visit 
>> https://groups.google.com/d/msgid/weewx-user/4506f570-fb2c-40f2-95ab-a98c02a59011n%40googlegroups.com.
>>
>
