Do you know if the IV or key value changed?
I.e. did you copy the key file from the old server and use the old server's
config or the old server's IV value?
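One quick way to check whether the key material matches between the two boxes (the key path below is piler's default and the host name is a placeholder; adjust both to your setup):

```shell
# Compare the checksum of the key file on both servers.
# If the checksums differ, the archive was encrypted with a different key
# and decryption on the new server will fail.
md5sum /etc/piler/piler.key
ssh old-server "md5sum /etc/piler/piler.key"
```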
> On 16 Jul 2020, at 00:43, Marcelo Machado wrote:
Piler should drop the usage of all those outdated libraries and use
> On 08 May 2019, at 10:52, Katterl Christian wrote:
> In at least my case, this does not seem to work.
> BR, Christian
> From: Janos SUTO
> Sent: Monday, 6 May 2019 11:33
I've done more than 50 multi-TB migrations.
Pretty much: create a tar archive and use rsync with tweaked SSH encryption
settings to speed up the transfer.
Nowadays we just do a ZFS sync of the pool (disk storage) to the new server's
ZFS; it took a little over 7 hours to transfer 20 TB.
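The two approaches above might look roughly like this as a shell sketch (the host name, paths, pool/dataset names, and cipher choice are all placeholders, not values from the thread):

```shell
# Approach 1: tar up the store, then rsync it over SSH with a cheaper
# cipher to reduce CPU overhead on large transfers.
tar -cf /tmp/piler-store.tar -C /var piler
rsync -av -e "ssh -c aes128-ctr" /tmp/piler-store.tar new-server:/tmp/

# Approach 2: snapshot the ZFS dataset and replicate it to the new server.
zfs snapshot tank/piler@migrate
zfs send tank/piler@migrate | ssh new-server zfs recv tank/piler
```

For incremental catch-up runs before the final cutover, `zfs send -i` between two snapshots keeps the downtime window short.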
Basically you can boot into rescue mode and repair the boot loader to fix your
original system, i.e. get it booting.
Another alternative would be to mount the "drives" on a working OS to recover
the piler config and vector values, download the message store, and try to make
a database backup.
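A rough sketch of that recovery path (the device name, mount point, and backup destination are placeholders; the config and key paths are piler defaults, so adjust them if your install differs):

```shell
# Mount the old system's disk from a working OS or rescue environment.
mount /dev/sdb1 /mnt/old

# Recover the piler config and key material needed to decrypt the archive.
cp /mnt/old/etc/piler/piler.conf /mnt/old/etc/piler/piler.key /backup/

# Copy out the message store.
rsync -a /mnt/old/var/piler/store/ /backup/store/

# Dump the piler database if MySQL can still be started against the
# old datadir (credentials are placeholders).
mysqldump -u piler -p piler > /backup/piler.sql
```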
The left behind attachments are deduplicated attachments which could be needed
for other emails.
We have a commercial tool which will scan all the attachments and check if they
Also check your /var/log, as logs can grow insanely large if you have debugging
enabled.
Please watch your language.
What do you want to achieve by managing the folders ?
_ eXtremeSHOK.com _
> On 30 Jun 2016, at 1:38 PM, Joe Rady wrote:
> I've managed to get some results now. Excellent!
> Is there a way to
Please list the steps
Sent from my iPhone
> On 03 Mar 2016, at 7:11 PM, Tim Stumbo wrote:
> Edwin, I can send you the steps I took to fix my dates if you like.
>> On Thursday, March 3, 2016, Janos SUTO wrote:
>> Hello Edwin,
In Exchange 2010, you use the following format for the username
to log into the shared mailbox
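The actual format appears to have been cut off in this message. For reference, a commonly documented pattern for logging into an Exchange 2010 shared mailbox over IMAP/POP3 is the three-part username below (the domain, service account, and mailbox alias are placeholders):

```
DOMAIN\serviceaccount\sharedmailboxalias
```

The service account must have full access rights to the shared mailbox for this to work.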
On 21 Jul 2015, at 9:15 PM, Joern Quillmann, kuehlhaus AG
I'm running piler on Ubuntu 15.04
You're overthinking the whole thing by trying to scale a single instance.
Create multiple virtual machines on something like OpenStack, etc.
One VM per client/company; scale the storage and resources as needed.
When a client leaves, kill the VM.
This is how we do ours, far less maintenance and