Hello,
I find this information very interesting:
> In the upcoming Scale release there will be a CES S3 (bare-metal) version of
> these S3 endpoints. And this is already promising for us.

Any idea when that release will be available? I heard something about Q1/Q2
2024, and I am truly looking forward to a serious S3-on-GPFS solution.
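
For context, what we mainly need is plain S3 client access on top of a Scale
file system. A minimal sketch of how I would expect to exercise such an
endpoint with boto3 is below; the endpoint URL, credentials, and bucket name
are placeholders of my own, not anything IBM has published:

    import boto3

    # Hypothetical CES S3 endpoint exposed by the protocol nodes.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://ces-s3.example.org",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    # Create a bucket backed by the Scale file system, upload a file,
    # and list it back through the S3 API.
    s3.create_bucket(Bucket="test-bucket")
    s3.upload_file("sample.dat", "test-bucket", "sample.dat")
    for obj in s3.list_objects_v2(Bucket="test-bucket").get("Contents", []):
        print(obj["Key"], obj["Size"])

If the upcoming CES S3 release supports this kind of access, it would cover
our use case.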

Best regards,
Patryk Belzak.

HPC sysadmin,
Wroclaw Centre for Networking and Supercomputing.

On 2024-02-29 06:46 AM, Andi Nør Christiansen wrote:
>    MinIO is not supported on GPFS, just an FYI. We had a long discussion with
>    them about that, as we also had CES S3 that we wanted to move away from.
> 
>    We went in the same direction as Renar.
> 
>    Best Regards
> 
>    Andi Nør Christiansen
>    IT Solution Specialist
> 
>    Mobile +45 23 89 59 75   
>    E-mail [email protected] 
>    Web    www.b4restore.com 
> 
> 
>    -----Original Message-----
>    From: gpfsug-discuss <[email protected]> On Behalf Of
>    Stephan Graf
>    Sent: 28 February 2024 18:02
>    To: [email protected]
>    Subject: Re: [gpfsug-discuss] storage-scale-object
> 
>    Hi Christoph,
> 
>    we are running CES Object in production and are facing the same issue,
>    namely that this solution is deprecated. Our security division keeps
>    sending us scan reports showing that the servers are running a Python
>    version which is no longer supported (and that we should upgrade).
>    So we are looking for another solution, which looks likely to be MinIO
>    (on top of a Scale file system).
> 
>    Stephan
> 
>    On 2/28/24 17:37, Grunenberg, Renar wrote:
>    > Hello Daniel, Christoph,
>    > currently IBM is not doing a great job on that, but there is some light
>    > on this topic.
>    > Currently you can use Swift/S3 on 5.1.9.1 with a back-level Scale (Swift)
>    > version, but I don't recommend this.
>    > Since 5.1.3.1 you can use the IBM Spectrum Scale DAS S3 service
>    > (containerized S3 endpoints via the NooBaa-based operator in OpenShift
>    > Data Foundation), but after 5.1.9.1 DAS will also be deprecated. (We use
>    > this version today.)
>    > In the upcoming Scale release there will be a CES S3 (bare-metal) version
>    > of these S3 endpoints. And this is already promising for us.
>    >
>    >
>    >
>    > Renar Grunenberg
>    > Abteilung Informatik - Betrieb
>    >
>    > HUK-COBURG
>    > Bahnhofsplatz
>    > 96444 Coburg
>    > Telefon:  09561 96-44110
>    > Telefax:  09561 96-44104
>    > E-Mail:   [email protected]
>    > Internet: www.huk.de
>    >
>    > -----Original Message-----
>    > From: gpfsug-discuss <[email protected]> On Behalf Of
>    > Christoph Martin
>    > Sent: Wednesday, 28 February 2024 17:24
>    > To: gpfsug main discussion list <[email protected]>
>    > Subject: Re: [gpfsug-discuss] storage-scale-object
>    >
>    > Hi Daniel,
>    >
>    > On 28.02.24 at 17:06, Kidger, Daniel wrote:
>    >> I would assume it is because most customers want to use the S3
>    >> interface rather than the Swift one?
>    >
>    > And OpenStack/Swift is very complex, so I can understand why IBM is
>    > getting rid of it.
>    > But 5.1.9.1 has no object support at all. There is no S3 interface.
>    >
>    > Christoph
> 
>    --
>    Stephan Graf
>    HPC, Cloud and Data Systems and Services
>    Juelich Supercomputing Centre
> 
>    Phone:  +49-2461-61-6578
>    Fax:    +49-2461-61-6656
>    E-mail: [email protected]
>    WWW:    http://www.fz-juelich.de/jsc/

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org
