Re: [CentOS] CentOS 6 fix sudo CVE-2021-3156

2021-01-27 Thread Maxim Shpakov
I think it is just not released yet. OL6 is still on its support track.
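
For anyone who has to keep a CentOS 6 box around in the meantime, here is a minimal check, based on the public advisory for CVE-2021-3156 (a sketch, not an official test), to see whether the installed sudo build is affected. Run it as a regular, non-root user:

sudoedit -s /
# an error starting with "sudoedit:" suggests a vulnerable build
# an error starting with "usage:" suggests the heap overflow is already fixed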

On Wed, 27 Jan 2021 at 12:33, Simon Matter  wrote:

> > Hi
> >
> > You can use Oracle Linux 6; it is still supported (till March 2021).
>
> But I don't find this sudo update or the recent openssl update in their
> repos? Is this for paying customers only or what?
>
> Simon
>
> >
> > On Wed, 27 Jan 2021 at 09:38, Gionatan Danti  wrote:
> >
> >> Hi all,
> >> do you know if a fix for sudo CVE-2021-3156 is available for CentOS 6?
> >>
> >> While CentOS 6 is not supported anymore, Red Hat has it under its
> >> paid support agreement (see:
> >> https://access.redhat.com/security/vulnerabilities/RHSB-2021-002).
> >>
> >> So I wonder if some community-packaged patch exists...
> >> Thanks.
> >>
> >> --
> >> Danti Gionatan
> >> Supporto Tecnico
> >> Assyoma S.r.l. - www.assyoma.it
> >> email: g.da...@assyoma.it - i...@assyoma.it
> >> GPG public key ID: FF5F32A8


Re: [CentOS] CentOS 6 fix sudo CVE-2021-3156

2021-01-27 Thread Maxim Shpakov
Hi

You can use Oracle Linux 6; it is still supported (till March 2021).
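
For an existing CentOS 6 machine, Oracle also publishes a switch script. A rough sketch of the move (the URL is the path as I remember it, so verify it before running anything as root):

curl -O https://linux.oracle.com/switch/centos2ol.sh
sh centos2ol.sh        # repoints yum at the Oracle Linux 6 repositories
yum update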

On Wed, 27 Jan 2021 at 09:38, Gionatan Danti  wrote:

> Hi all,
> do you know if a fix for sudo CVE-2021-3156 is available for CentOS 6?
>
> While CentOS 6 is not supported anymore, Red Hat has it under its
> paid support agreement (see:
> https://access.redhat.com/security/vulnerabilities/RHSB-2021-002).
>
> So I wonder if some community-packaged patch exists...
> Thanks.
>
> --
> Danti Gionatan
> Supporto Tecnico
> Assyoma S.r.l. - www.assyoma.it
> email: g.da...@assyoma.it - i...@assyoma.it
> GPG public key ID: FF5F32A8


Re: [CentOS] CentOS 7.x Russian certification

2017-08-18 Thread Maxim Shpakov
Hi

Only RHEL is certified:

http://www.linuxcenter.ru/shop/sertified_fstek/RedHat/

On 18 Aug 2017 at 11:50, "Gorazd"  wrote:

> Dear all,
>
> does anyone have any idea where to get a list of all CentOS 7.x versions
> that are certified to be used within Russia?
> My guess is that organizations such as FSTEK and TR-CU declare those
> versions/distributions of CentOS that are certified to be used with
> commercial projects in Russia.
>
> Anyone with some information on this, please advise, I would be very
> grateful.
>
> Regards,
> Gorazd


Re: [CentOS] anaconda, kickstart, lvm over raid, logvol --grow, centos7 mystery

2014-08-13 Thread Maxim Shpakov
Just want to mention that this behaviour is already a known bug:

https://bugzilla.redhat.com/show_bug.cgi?id=1093144#c7
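
Until that lands, one workaround is to drop --grow from the root logvol and claim the remaining free extents in %post instead, where LVM sees the real numbers. A hypothetical sketch only, with VG/LV names taken from the kickstart quoted below (vg0/root, ext4 — use xfs_growfs / instead of resize2fs for an xfs root):

%post
# grow the root LV into whatever the volume group has left, then grow the filesystem
lvextend -l +100%FREE /dev/vg0/root
resize2fs /dev/vg0/root
%end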

2014-07-31 12:01 GMT+03:00 Maxim Shpakov :
> Hi!
>
> I can confirm this.
>
> --grow on LVM partition is broken for raid+lvm kickstart installs.
>
>
> bootloader --location=mbr --driveorder=sda,sdb --append="net.ifnames=0 crashkernel=auto rhgb quiet"
> zerombr
> clearpart --all --drives=sda,sdb --initlabel
>
> part raid.1 --asprimary --size=200 --ondisk=sda
> part raid.2 --size=1 --grow --ondisk=sda
> part raid.3 --asprimary --size=200 --ondisk=sdb
> part raid.4 --size=1 --grow --ondisk=sdb
>
> raid /boot --fstype=ext4 --level=RAID1 --device=md0 raid.1 raid.3
> raid pv.1 --level=RAID1 --device=md1 raid.2 raid.4
>
> volgroup vg0 --pesize=65536 pv.1
>
> logvol swap --name=swap --vgname=vg0 --size=4096
> logvol /tmp --fstype=ext4 --name=tmp --vgname=vg0 --size=4096 --fsoptions="noexec,nosuid,nodev,noatime"
> logvol / --fstype=ext4 --name=root --vgname=vg0 --size=10240 --grow --fsoptions="defaults,noatime"
>
> Such a partitioning scheme is not working. Anaconda is complaining about
> "ValueError: not enough free space in volume group"
>
> But if I remove --grow from the last logvol, everything is ok.
>
> I don't understand what I'm doing wrong; such a kickstart works
> flawlessly for C6 installs.
>
> 2014-07-16 14:21 GMT+03:00 Borislav Andric :
>> I am testing some kickstarts on an ESXi virtual machine with a pair of 16GB disks.
>> Partitioning is lvm over raid.
>>
>> If i am using "logvol --grow" i get "ValueError: not enough free space in volume group"
>> The only workaround i can find is to add --maxsize=XXX where XXX is at least 640MB less than available.
>> (10 extents or 320Mb per created logical volume)
>>
>> The following snippet is failing with "DEBUG blivet: failed to set size: 640MB short"
>>
>> part raid.01 --size 512 --asprimary --ondrive=sda
>> part raid.02 --size   1 --asprimary --ondrive=sda --grow
>> part raid.11 --size 512 --asprimary --ondrive=sdb
>> part raid.12 --size   1 --asprimary --ondrive=sdb --grow
>> raid /boot   --fstype="xfs"   --device="md0" --level=RAID1 raid.01 raid.11
>> raid pv.01   --fstype="lvmpv" --device="md1" --level=RAID1 raid.02 raid.12
>> volgroup vg0 pv.01
>> logvol /     --fstype="xfs"  --grow --size=4096 --name=lvRoot --vgname=vg0
>> logvol swap  --fstype="swap" --size=2048 --name=lvSwap --vgname=vg0
>>
>> If i only add --maxsize=13164 everything is working.
>> (but after install i have 640MB in 20 Free PE in vg0, for details see "after --maxsize install")
>>
>> logvol / --fstype="xfs" --grow --size=4096 --name=lvRoot --vgname=vg0
>> --changed to ->
>> logvol / --fstype="xfs" --grow --size=4096 --name=lvRoot --vgname=vg0 --maxsize=13164
>>
>>
>> Some interesting DEBUG lines :
>>
>>> 15840MB lvmvg vg0 (26)
>>> vg0 size is 15840MB
>>> Adding vg0-lvRoot/4096MB to vg0
>>> vg vg0 has 11424MB free
>>
>> should it be 11744, or is there 320MB overhead?
>>
>>> Adding vg0-lvSwap/2048MB to vg0
>>> vg vg0 has 9056MB free
>>
>> 320MB missing again, total of 640MB
>>
>>> vg vg0: 9056MB free ; lvs: ['lvRoot', 'lvSwap']
>>
>> nice, i have 9056MB free in vg0 (640MB short but still ... )
>>
>>>  1 requests and 303 (9696MB) left in chunk
>>> adding 303 (9696MB) to 27 (vg0-lvRoot)
>>
>> wtf, who is counting what !!
>>
>>> failed to set size: 640MB short
>>
>>
>> Could anyone shed some light ?
>>
>>
>>
>>
>>
>> P.S.
>>
>> "after --maxsize install"
>> =
>> If i limit root logvol with --maxsize=13164, after installation i get 640MB of free space (20 Free PE).
>>
>>
>> The missing 640MB is free according to lvm:
>> [root@c7-pxe-install ~]# pvdisplay
>>   --- Physical volume ---
>>   PV Name   /dev/md1
>>   VG Name   vg0
>>   PV Size   15.49 GiB / not usable 22.88 MiB
>>   Allocatable   yes
>>   PE Size   32.00 MiB
>>   Total PE  495
>

Re: [CentOS] kickstart - dont wipe data

2014-08-07 Thread Maxim Shpakov
Hi!

I think that your problem is here:

volgroup v pv.00 --noformat
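
That is, list the PV on the volgroup line when you reuse the group. For the md raid case, the shape I would try looks roughly like this (untested here; --useexisting on the raid and volgroup lines is my assumption about how anaconda wants preexisting devices declared):

part raid.11 --onpart=vda2 --noformat
part raid.12 --onpart=vdb2 --noformat
raid pv.00 --level=1 --device=md1 --noformat --useexisting
volgroup v pv.00 --noformat --useexisting
logvol / --fstype=xfs --name=wurzel --vgname=v --useexisting
logvol /home --fstype=ext4 --name=home --vgname=v --noformat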

2014-08-07 13:06 GMT+03:00 Markus Falb :
> Hi,
> I am struggling with kickstart.
> What I want to achieve is a reinstall, but some data partitions should
> survive the install, i.e. they should not be formatted.
> With a single disk this works, here is the relevant part from the
> kickstart file (I shortened the name of the volume group)
>
> ...
> zerombr
> clearpart --none --initlabel
> part /boot --fstype="xfs"   --label=boot --onpart=vda1
> part pv.00 --fstype="lvmpv"  --onpart=vda2 --noformat
> volgroup v --noformat
> logvol / --fstype=xfs  --name=wurzel --vgname=v --useexisting
> logvol /home --fstype=ext4 --name=home   --vgname=v --noformat
> ...
>
> you see, / will be reformatted, /boot will be reformatted, but /home
> will not.
>
> Now a machine with md raid 1. I tried the following.
>
> ...
> #zerombr
> #clearpart --none --initlabel
>
> part raid.01 --onpart vda1 --noformat
> part raid.02 --onpart vdb1 --noformat
> raid /boot --fstype xfs --label boot --level 1 --device md0 --noformat
>
> part raid.11 --onpart vda2 --noformat
> part raid.12 --onpart vdb2 --noformat
> raid pv.00 --level 1 --device md1 --noformat
>
> volgroup v --noformat
> logvol / --fstype=xfs --name=wurzel --vgname=v --useexisting
> logvol /home --fstype=ext4 --name=home  --vgname=v --noformat
> ...
>
> But I get
>
> ...
> 02:54:21,069 ERR anaconda: storage configuration failed: The following
> problem occurred on line 6 of the kickstart file:
>
> No preexisting RAID device with the name "0" was found.
> ...
>
> What is wrong? I really want to preserve the data and only wipe the system.
>
> --
> Kind Regards, Markus Falb
>


Re: [CentOS] anaconda, kickstart, lvm over raid, logvol --grow, centos7 mystery

2014-07-31 Thread Maxim Shpakov
Hi!

I can confirm this.

--grow on LVM partition is broken for raid+lvm kickstart installs.


bootloader --location=mbr --driveorder=sda,sdb --append="net.ifnames=0 crashkernel=auto rhgb quiet"
zerombr
clearpart --all --drives=sda,sdb --initlabel

part raid.1 --asprimary --size=200 --ondisk=sda
part raid.2 --size=1 --grow --ondisk=sda
part raid.3 --asprimary --size=200 --ondisk=sdb
part raid.4 --size=1 --grow --ondisk=sdb

raid /boot --fstype=ext4 --level=RAID1 --device=md0 raid.1 raid.3
raid pv.1 --level=RAID1 --device=md1 raid.2 raid.4

volgroup vg0 --pesize=65536 pv.1

logvol swap --name=swap --vgname=vg0 --size=4096
logvol /tmp --fstype=ext4 --name=tmp --vgname=vg0 --size=4096 --fsoptions="noexec,nosuid,nodev,noatime"
logvol / --fstype=ext4 --name=root --vgname=vg0 --size=10240 --grow --fsoptions="defaults,noatime"

Such a partitioning scheme is not working. Anaconda is complaining about
"ValueError: not enough free space in volume group"

But if I remove --grow from the last logvol, everything is ok.

I don't understand what I'm doing wrong; such a kickstart works
flawlessly for C6 installs.
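
For anyone hitting the same thing, it helps to compare what LVM itself reports with what blivet calculates. A quick diagnostic sketch (vg0 is the VG name from the kickstart above; run it from the shell on tty2 during the install, or after an install done without --grow):

vgs --units m -o vg_name,vg_size,vg_free,vg_extent_size vg0
lvs --units m -o lv_name,lv_size vg0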

2014-07-16 14:21 GMT+03:00 Borislav Andric :
> I am testing some kickstarts on an ESXi virtual machine with a pair of 16GB disks.
> Partitioning is lvm over raid.
>
> If i am using "logvol --grow" i get "ValueError: not enough free space in volume group"
> The only workaround i can find is to add --maxsize=XXX where XXX is at least 640MB less than available.
> (10 extents or 320Mb per created logical volume)
>
> The following snippet is failing with "DEBUG blivet: failed to set size: 640MB short"
>
> part raid.01 --size 512 --asprimary --ondrive=sda
> part raid.02 --size   1 --asprimary --ondrive=sda --grow
> part raid.11 --size 512 --asprimary --ondrive=sdb
> part raid.12 --size   1 --asprimary --ondrive=sdb --grow
> raid /boot   --fstype="xfs"   --device="md0" --level=RAID1 raid.01 raid.11
> raid pv.01   --fstype="lvmpv" --device="md1" --level=RAID1 raid.02 raid.12
> volgroup vg0 pv.01
> logvol /     --fstype="xfs"  --grow --size=4096 --name=lvRoot --vgname=vg0
> logvol swap  --fstype="swap" --size=2048 --name=lvSwap --vgname=vg0
>
> If i only add --maxsize=13164 everything is working.
> (but after install i have 640MB in 20 Free PE in vg0, for details see "after --maxsize install")
>
> logvol / --fstype="xfs" --grow --size=4096 --name=lvRoot --vgname=vg0
> --changed to ->
> logvol / --fstype="xfs" --grow --size=4096 --name=lvRoot --vgname=vg0 --maxsize=13164
>
>
> Some interesting DEBUG lines :
>
>> 15840MB lvmvg vg0 (26)
>> vg0 size is 15840MB
>> Adding vg0-lvRoot/4096MB to vg0
>> vg vg0 has 11424MB free
>
> should it be 11744, or is there 320MB overhead?
>
>> Adding vg0-lvSwap/2048MB to vg0
>> vg vg0 has 9056MB free
>
> 320MB missing again, total of 640MB
>
>> vg vg0: 9056MB free ; lvs: ['lvRoot', 'lvSwap']
>
> nice, i have 9056MB free in vg0 (640MB short but still ... )
>
>>  1 requests and 303 (9696MB) left in chunk
>> adding 303 (9696MB) to 27 (vg0-lvRoot)
>
> wtf, who is counting what !!
>
>> failed to set size: 640MB short
>
>
> Could anyone shed some light ?
>
>
>
>
>
> P.S.
>
> "after --maxsize install"
> =
> If i limit root logvol with --maxsize=13164, after installation i get 640MB of free space (20 Free PE).
>
>
> The missing 640MB is free according to lvm:
> [root@c7-pxe-install ~]# pvdisplay
>   --- Physical volume ---
>   PV Name   /dev/md1
>   VG Name   vg0
>   PV Size   15.49 GiB / not usable 22.88 MiB
>   Allocatable   yes
>   PE Size   32.00 MiB
>   Total PE  495
>>>Free PE   20
>   Allocated PE  475
>   PV UUID   uBLBqQ-Tpao-yPVj-1FVA-488x-Bs0K-ebQOmI
>
>
> And i can use it :
> [root@c7-pxe-install ~]# lvextend -L +640M vg0/lvRoot
>   Extending logical volume lvRoot to 13.47 GiB
>   Logical volume lvRoot successfully resized
>
> [root@c7-pxe-install ~]# xfs_growfs /
> meta-data=/dev/mapper/vg0-lvRoot isize=256    agcount=4, agsize=841728 blks
>          =                       sectsz=512   attr=2, projid32bit=1
>          =                       crc=0
> data     =                       bsize=4096   blocks=3366912, imaxpct=25
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
> log      =internal               bsize=4096   blocks=2560, version=2
>          =                       sectsz=512   sunit=0 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
> data blocks changed from 3

Re: [CentOS] using Red Hat site for documentation

2014-07-30 Thread Maxim Shpakov
2014-07-30 23:03 GMT+03:00 Valeri Galtsev :
> So, please, teach me something: how do I build an enterprise level server
> based on CentOS 7 which I'll be able to run 1-2 years without a reboot (I did
> apologize already for being an ignorant person ;-)
>

Oh, Valera, it seems you don't know about this:

http://www.kernelcare.com/try_it/install.php
http://www.cloudlinux.com/blog/clnews/kernelcare-for-centos-rhel-7.php


Re: [CentOS] [CentALT] php-redis depedency error.

2014-04-08 Thread Maxim Shpakov
Hi!

Add exclude=php55* to /etc/yum.repos.d/centalt.repo

There are problems regarding php55 dependencies in centalt repo.
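
Something like this, as a sketch; the [CentALT] section name is taken from the repo id shown in the yum output below, so adjust it if the file names the section differently. Run as root (or via sudo):

# append exclude=php55* to the CentALT section of the repo file
sed -i '/^\[CentALT\]/a exclude=php55*' /etc/yum.repos.d/centalt.repo
yum clean all
yum install php-redis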


2014-04-08 15:26 GMT+03:00 sunshareall0709 :
> Hi, All.
> When I try to install php-redis, there are obviously 4 packages that hit the dependency.
> One has already been INSTALLED, as follows.
>
> I've GOOGLED, NOTHING helps.
> How can I get over this? Please Help, Thank you!
> -- Sunshare
> -- 2014-04-08
>
>
> 
>
> [sunshare@sunshare ~]$ sudo yum install php-redis
> [sudo] password for sunshare:
> Loaded plugins: refresh-packagekit, security
> Setting up Install Process
> Resolving Dependencies
> --> Running transaction check
> ---> Package php-redis.x86_64 77:2.2.1-5.el6 will be installed
> --> Processing Dependency: php(zend-abi) = 20090626-x86-64 for package: 
> 77:php-redis-2.2.1-5.el6.x86_64
> --> Finished Dependency Resolution
> Error: Package: 77:php-redis-2.2.1-5.el6.x86_64 (CentALT)
>Requires: php(zend-abi) = 20090626-x86-64
>Installed: php-common-5.3.28-4.el6.x86_64 (@CentALT)
>php(zend-abi) = 20090626
>php(zend-abi) = 20090626-x86-64
>Available: php-common-5.3.3-26.el6.x86_64 (base)
>php(zend-abi) = 20090626
>Available: php-common-5.3.3-27.el6_5.x86_64 (updates)
>php(zend-abi) = 20090626
>Available: php55-common-5.5.11-1.el6.x86_64 (CentALT)
>php(zend-abi) = 20121212-64
>  You could try using --skip-broken to work around the problem
>  You could try running: rpm -Va --nofiles --nodigest
>
> Following is my repo info:
> 
> [sunshare@sunshare ~]$ sudo yum makecache
> Loaded plugins: refresh-packagekit, security
> CentALT                                          |  951 B     00:00
> CentALT/filelists                                | 136 kB     00:00
> CentALT/primary                                  | 102 kB     00:00
> CentALT/other                                    | 758 kB     00:03
> CentALT                                                     318/318
> CentALT                                                     318/318
> CentALT                                                     318/318
> base                                             | 3.7 kB     00:00
> base/group_gz                                    | 220 kB     00:01
> base/filelists_db                                | 5.9 MB     00:33
> base/primary_db                                  | 4.4 MB     00:22
> base/other_db                                    | 2.8 MB     00:15
> epel                                             | 4.4 kB     00:00
> epel/group_gz                                    | 237 kB     00:01
> epel/filelists_db                                | 8.4 MB     00:42
> epel/primary_db                                  | 6.0 MB     00:32
> epel/other_db                                    | 3.6 MB     00:19
> epel/updateinfo                                  | 769 kB     00:03
> extras

Re: [CentOS] looking for bind 9.9 for Centos 6 - found 'CentAlt'

2013-03-13 Thread Maxim Shpakov
You can build it yourself from the SRPM they provide.
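
Roughly like this, assuming you grab the bind source RPM from their repository; the filename below is a placeholder, use whatever version they actually ship:

yum install rpm-build yum-utils
yum-builddep bind-9.9.x-1.el6.src.rpm      # pull in the build dependencies
rpmbuild --rebuild bind-9.9.x-1.el6.src.rpm
# the rebuilt i386 packages normally land under ~/rpmbuild/RPMS/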

2013/3/13 Robert Moskowitz :
> Yesterday, in a hall conversation, I was strongly directed to bind 9.9.
> It can do the inline signing of zone changes that 9.8 can't.
>
> So today, I went digging for someone supplying it all nicely packaged
> for me; my servers are all i386.
>
> I found a couple sources but the one that I found is:
>
> http://pkgs.org/centos-6-rhel-6/centalt-i386/
>
> but digging deeper I found it pointing to:
>
> http://centos.alt.ru/repository/centos/6/i386/
>
> So I am not so sure I want to get it from there.  What are my choices?
> I found another site, but it only has x86_64 rpms.
>
>
>
>
>


Re: [CentOS] Strange behavior from software RAID

2013-03-02 Thread Maxim Shpakov
Hello.
mdadm.conf is inside the initrd file.
If you have updated the MD configuration, you need to regenerate your initrd.

http://wiki.centos.org/TipsAndTricks/CreateNewInitrd
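
On CentOS 6 the short version looks roughly like this (a sketch; on CentOS 5 you would use mkinitrd instead of dracut):

# check whether mdadm.conf is baked into the current initramfs
lsinitrd /boot/initramfs-$(uname -r).img | grep mdadm.conf
# rebuild it after changing /etc/mdadm.conf
dracut -f /boot/initramfs-$(uname -r).img $(uname -r)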

2013/3/3 Harold Pritchett :
> Here I am following up on my own post...
>
> It occurred to me that all of this stuff must be magic.
>
> How does it work when the mdadm.conf file is on a raid/LVM volume which is 
> not available at boot time?
>
> I looked in the /boot filesystem, the only one which is available at boot 
> time and there is nothing there, unless this data is actually saved in one of 
> the kernel modules or other
> binary files...
>
> Harold
>


Re: [CentOS] fail2ban logrotate failure

2012-04-27 Thread Maxim Shpakov
https://github.com/fail2ban/fail2ban/issues/44
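
The issue below boils down to pointing fail2ban at the new file after rotation instead of restarting it. A sketch of the logrotate stanza I would try for /var/log/fail2ban (rotation frequency and count are placeholders):

/var/log/fail2ban {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    postrotate
        /usr/bin/fail2ban-client set logtarget /var/log/fail2ban >/dev/null 2>&1 || true
    endscript
}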

2012/4/27 Bob Hoffman :
> I got the fail2ban from epel.
> There were a number of issues relating to using a log file...
> logwatch was looking for both fail2ban and fail2ban.log
> logrotate file fail2ban added looked for fail2ban.log and then reset
> itself to syslog
> fail2ban itself went to syslog, overriding its fail2ban.log.
>
> took a while, but I use /var/log/fail2ban now, that finally worked
> through logrotates and logwatch.
>
> Problem with centos variant of fail2ban:
>
> logrotate causes all 'ban' actions to stop happening. I am pretty sure
> it stops reading the logs but still functions.
> Unban actions still keep showing up in the log, but the 'ban' actions
> just stop. Program is running, but no longer working.
>
> Long searches online show a million others with the same issue. Only way
> to prevent it seems to be to add a reload or restart in the syslog file.
> This is undesired due to losing all banned ips listed.
>
> It happens as part of the logrotate. The logrotate file I have changed a
> few times and recently tried this
>     postrotate
>       /usr/bin/fail2ban-client set logtarget /var/log/fail2ban 1>/dev/null || true
>     endscript
>
> setting the logtarget, which the original called for changing it to
> syslog and 2>/dev/null || true
>
>
> so what would you do? I imagine when logrotate happens and syslog
> restarts something is causing fail2ban to stop working properly, but
> still timing 'unbans'.
>
> This is apparently a bug/problem for almost everyone of all distros.
> Other than just uninstalling, the only way to make it work would be a
> restart around 4 every morning, making any long term bans useless.
>
> My last thought is to just throw the /var/log/fail2ban to be rotated by
> syslog like maillog and the others..and not doing anything special.
> Maybe it would just work.
>
> I write here because I know there are hundreds of you and someone must
> have figured out how to make fail2ban work for more than 24 hours
> without a restart


Re: [CentOS] CentOS 5.8 Critical Samba Update

2012-03-05 Thread Maxim Shpakov
There is no 2.6.18-308 kernel there (

2012/3/5 John R Pierce :
> On 03/05/12 12:57 AM, Maxim Shpakov wrote:
>> Hello, what about Centos 5.8 release?
>> I need kernel release from it. Can I download it from CR ?
>
> yum update kernel
>
>
>
> --
> john r pierce                            N 37, W 122
> santa cruz ca                         mid-left coast
>


Re: [CentOS] CentOS 5.8 Critical Samba Update

2012-03-05 Thread Maxim Shpakov
Hello, what about the CentOS 5.8 release?
I need the kernel release from it. Can I download it from CR?
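
If the CR repository has been populated for 5.8, something along these lines should pull the newer kernel; the centos-release-cr package name and the "cr" repo id are my assumption here, so check the CR announcement for the exact setup:

yum install centos-release-cr
yum --enablerepo=cr update kernel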

2012/2/24 Johnny Hughes :
> There is a critical update for samba for centos-5.8 ... we are working
> on CentOS-5.8 right now and I fully expect it to be released in a week
> or less.  For those of you who can not wait for a week, here is the
> samba critical update:
>
> http://people.centos.org/hughesjr/c58-samba/x8664/critical/
>
> http://people.centos.org/hughesjr/c58-samba/i386/critical/
>
> These may or may not work without the rest of 5.8 ... for those who do
> try them, please provide feedback here in this thread.
>
> Thanks,
> Johnny Hughes
>
>