Hello,
what is the currently preferred method, in terms of stability and
performance, for exporting a CephFS directory with Samba?
- locally mount the CephFS directory and export it via Samba?
- using the "vfs_ceph" module of Samba?
Best,
Martin
___
com/en/luminous/" will not work anymore.
Is it planned to make the documentation of the older versions available
again through docs.ceph.com?
Best,
Martin
On Sat, Nov 21, 2020 at 2:11 AM Dan Mick wrote:
>
> On 11/14/2020 10:56 AM, Martin Palma wrote:
> > Hello,
> >
> > maybe I missed the
Hello,
maybe I missed the announcement but why is the documentation of the
older ceph version not accessible anymore on docs.ceph.com?
Best,
Martin
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to
Thanks for the clarification. And when raising the PG count, should we
also set the "noscrub" and "nodeep-scrub" flags during data movement?
What is the recommendation here?
On Fri, Aug 28, 2020 at 10:25 AM Stefan Kooman wrote:
>
> On 2020-08-28 09:36, Martin Palma wrote:
> > To
To set these setting during runtime I use the following commands from
my admin node:
ceph tell osd.* injectargs '--osd-max-backfills 1'
ceph tell osd.* injectargs '--osd-recovery-max-active 1'
ceph tell osd.* injectargs '--osd-op-queue-cut-off high'
Right?
On Wed, Aug 19, 2020 at 11:08 AM
in the pool and the whole pool.
On Thu, Aug 20, 2020 at 11:55 AM Martin Palma wrote:
>
> On one pool, which was only a test pool, we investigated both OSDs
> which host the inactive and incomplete PG with the following command:
>
> % ceph-objectstore-tool --data-path /var/lib/ceph/osd
an output. So we marked that PG on that OSD as complete. This
solved the inactive/incomplete PG for that pool.
The other PGs belong to our main CephFS pool, and we fear that doing
the same there could cause us to lose access to the whole pool and its data.
On Thu, Aug 20, 2020 at 11:49 AM Martin Palma wrote
Dan van der Ster wrote:
>
> Did you already mark osd.81 as lost?
>
> AFAIU you need to `ceph osd lost 81`, and *then* you can try the
> osd_find_best_info_ignore_history_les option.
>
> -- dan
>
>
> On Thu, Aug 20, 2020 at 11:31 AM Martin Palma wrote:
> >
e PGs (on a size 1 pool). I haven't tried any of
> > that but it could be worth a try. Apparently it only would work if the
> > affected PGs have 0 objects but that seems to be the case, right?
> >
> > Regards,
> > Eugen
> >
> > [1]
> > https://
If any Ceph consultants are reading this, please feel free to contact me
off list. We are seeking someone who can help us; of course we will
pay.
On Mon, Aug 17, 2020 at 12:50 PM Martin Palma wrote:
>
> After doing some research I suspect the problem is that during the
> cluster was ba
story_les_bound". We tried setting
"osd_find_best_info_ignore_history_les = true" but with no success; the
OSDs remain stuck in a peering loop.
On Mon, Aug 17, 2020 at 9:53 AM Martin Palma wrote:
>
> Here is the output with all OSD up and running.
>
> ceph -s: https://pastebi
Here is the output with all OSD up and running.
ceph -s: https://pastebin.com/5tMf12Lm
ceph health detail: https://pastebin.com/avDhcJt0
ceph osd tree: https://pastebin.com/XEB0eUbk
ceph osd pool ls detail: https://pastebin.com/ShSdmM5a
On Mon, Aug 17, 2020 at 9:38 AM Martin Palma wrote:
>
> (greatly raising choose_total_tries, e.g. 200, may be
> the solution to your problem):
> ceph osd crush dump | jq '[.rules, .tunables]'
>
> Peter
>
> On 8/16/20 1:18 AM, Martin Palma wrote:
> > Yes, but that didn’t help. After some time they have blocked requests again
Yes, but that didn’t help. After some time they have blocked requests again
and remain inactive and incomplete.
On Sat, 15 Aug 2020 at 16:58, wrote:
> Did you try to restart the said OSDs?
>
>
>
> Hth
>
> Mehmet
>
>
>
> Am 12. August 2020 2
> Are the OSDs online? Or do they refuse to boot?
Yes. They are up and running and not marked as down or out of the cluster.
> Can you list the data with ceph-objectstore-tool on these OSDs?
If you mean the "list" operation on the PG: it works and gives an output,
for example:
$ ceph-objectstore-tool
Hello,
after an unexpected power outage our production cluster has 5 PGs
inactive and incomplete. The OSDs on which these 5 PGs are located all
show "stuck requests are blocked":
Reduced data availability: 5 pgs inactive, 5 pgs incomplete
98 stuck requests are blocked > 4096 sec. Implicated
Hi, what is the maximum number of files per directory? I couldn't find
the answer in the docs.
Best,
Martin
Yes, in the end we are in the process of doing that, but we first
upgraded the MDSs, which worked fine and solved the problem we had
with CephFS.
Best,
Martin
On Wed, Feb 26, 2020 at 9:34 AM Konstantin Shalygin wrote:
>
> On 2/26/20 12:49 AM, Martin Palma wrote:
> > is it possibl
Hi Patrick,
we have performed a minor upgrade to 12.2.13 which resolved the issue.
We think it was the following bug:
https://tracker.ceph.com/issues/37723
Best,
Martin
On Thu, Feb 20, 2020 at 5:16 AM Patrick Donnelly wrote:
>
> Hi Martin,
>
> On Thu, Feb 13, 2020 at 4:10 AM
Hi,
is it possible to run the MDS on a newer version than the monitor nodes?
We run the monitor nodes on 12.2.10 and would like to upgrade the MDS
to 12.2.13; is this possible?
Best,
Martin
s one point you need to remember when returning interfaces:
> > make sure you handle nil correctly:
> >
> > type S struct {}
> >
> > type I interface {}
> >
> > func f() *S {
> > return nil
> > }
> >
> > func g() I {
> > x:=f()
I'm wondering if it is OK (or good Go code) for an interface method to
return another interface. Here is an example:
type Querier interface {
Query() string
}
type Decoder interface {
DecodeAndValidate() Querier
}
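A minimal sketch tying this question to the typed-nil gotcha quoted above. The concrete types `userQuery` and `jsonDecoder` are hypothetical names invented for illustration, not from the thread:

```go
package main

import "fmt"

// Querier and Decoder mirror the interfaces from the question.
type Querier interface {
	Query() string
}

type Decoder interface {
	DecodeAndValidate() Querier
}

// userQuery is a hypothetical concrete type implementing Querier.
type userQuery struct{ q string }

func (u *userQuery) Query() string { return u.q }

// jsonDecoder is a hypothetical Decoder implementation.
type jsonDecoder struct{ valid bool }

// DecodeAndValidate returns a typed nil pointer when validation fails.
// Because that pointer gets wrapped in the Querier interface, the
// caller's `!= nil` check still succeeds -- the gotcha from the reply.
func (d *jsonDecoder) DecodeAndValidate() Querier {
	var u *userQuery
	if d.valid {
		u = &userQuery{q: "SELECT 1"}
	}
	return u // typed nil if !d.valid: the interface value is non-nil!
}

func main() {
	var dec Decoder = &jsonDecoder{valid: false}
	q := dec.DecodeAndValidate()
	fmt.Println(q == nil) // false, even though the pointer inside is nil
}
```

Returning an interface from an interface method is fine Go; the thing to watch is to return an explicit untyped `nil` on failure rather than a nil concrete pointer.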
wrote:
> On Thu, Sep 26, 2019 at 1:14 PM Martin Palma wrote:
> >
> > Hello,
> >
> > I'm in the process of writing an HTTP API with Go. I use a middleware
> for generating and validating JWT tokens. On any incoming request the
> middleware checks the JWT and validates i
Hello,
I'm in the process of writing an HTTP API with Go. I use a middleware for
generating and validating JWT tokens. On any incoming request the
middleware checks the JWT and validates it. If valid it adds it to the
request header and calls the next handler.
Is it safe to use the JWT in
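The middleware flow described above can be sketched with the standard library alone. `validateJWT` is a hypothetical stand-in for the real JWT library call; the sketch passes the validated token via the request context, a common alternative to rewriting the header:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"net/http"
	"net/http/httptest"
)

type ctxKey string

const tokenKey ctxKey = "jwt"

// validateJWT is a hypothetical stand-in for a real JWT validation call.
func validateJWT(raw string) (string, error) {
	if raw == "" {
		return "", errors.New("missing token")
	}
	return raw, nil // assume valid for this sketch
}

// AuthMiddleware checks the JWT on every incoming request and, if valid,
// stores it in the request context before calling the next handler.
func AuthMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		tok, err := validateJWT(r.Header.Get("Authorization"))
		if err != nil {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		ctx := context.WithValue(r.Context(), tokenKey, tok)
		next.ServeHTTP(w, r.WithContext(ctx))
	})
}

func main() {
	// Exercise the middleware with httptest: no token -> 401.
	h := AuthMiddleware(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	}))
	rec := httptest.NewRecorder()
	h.ServeHTTP(rec, httptest.NewRequest("GET", "/", nil))
	fmt.Println(rec.Code) // 401
}
```

Downstream handlers read the token with `r.Context().Value(tokenKey)`; the context is request-scoped, so there is no risk of one request seeing another's token.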
Hi Will,
there is a dedicated mailing list for ceph-ansible:
http://lists.ceph.com/listinfo.cgi/ceph-ansible-ceph.com
Best,
Martin
On Thu, Jan 31, 2019 at 5:07 PM Will Dennis wrote:
>
> Hi all,
>
>
>
> Trying to utilize the ‘ceph-ansible’ project
> (https://github.com/ceph/ceph-ansible ) to
Upgrading to 4.15.0-43-generic fixed the problem.
Best,
Martin
On Fri, Jan 25, 2019 at 9:43 PM Ilya Dryomov wrote:
>
> On Fri, Jan 25, 2019 at 9:40 AM Martin Palma wrote:
> >
> > > Do you see them repeating every 30 seconds?
> >
> > yes:
> >
> > Jan
> Do you see them repeating every 30 seconds?
yes:
Jan 25 09:34:37 sdccgw01 kernel: [6306813.737615] libceph: mon4
10.8.55.203:6789 session lost, hunting for new mon
Jan 25 09:34:37 sdccgw01 kernel: [6306813.737620] libceph: mon3
10.8.55.202:6789 session lost, hunting for new mon
Jan 25 09:34:37
Hi Ilya,
thank you for the clarification. After setting the
"osd_map_messages_max" to 10 the io errors and the MDS error
"MDS_CLIENT_LATE_RELEASE" are gone.
The "mon session lost, hunting for new mon" messages didn't go
away... can it be that this is related to
We are experiencing the same issues on clients with CephFS mounted
using the kernel client and 4.x kernels.
The problem shows up when we add new OSDs, on reboots after
installing patches and when changing the weight.
Here the logs of a misbehaving client;
[6242967.890611] libceph: mon4
Hello,
maybe a dumb question, but is there a way to correlate the ceph kernel
module version with a specific Ceph version? For example, can I figure
this out using "modinfo ceph"?
What's the best way to check if a specific client is running at least
Luminous?
Best,
Martin
Same here also on Gmail with G Suite.
On Mon, Oct 8, 2018 at 12:31 AM Paul Emmerich wrote:
>
> I'm also seeing this once every few months or so on Gmail with G Suite.
>
> Paul
> Am So., 7. Okt. 2018 um 08:18 Uhr schrieb Joshua Chen
> :
> >
> > I also got removed once, got another warning once
> Mons are also on a 30s timeout.
> Even a short loss of quorum isn‘t noticeable for ongoing IO.
>
> Paul
>
> > Am 04.10.2018 um 11:03 schrieb Martin Palma :
> >
> > Also monitor election? That is the most fear we have since the monitor
> > nodes will no see each other
Also the monitor election? That is our biggest fear, since the monitor
nodes will not see each other for that timespan...
On Thu, Oct 4, 2018 at 10:21 AM Paul Emmerich wrote:
>
> 10 seconds is far below any relevant timeout values (generally 20-30
> seconds); so you will be fine without any
Hi all,
our Ceph cluster is distributed across two datacenters. Due to network
maintenance the link between the two datacenters will be down for ca. 8
- 10 seconds. During this time the public network of Ceph between the two
DCs will also be down.
What can we do to best handle this scenario to have
Thanks for the suggestions; we will also check for LVM volumes, etc. in
the future. The kernel version is 3.10.0-327.4.4.el7.x86_64
and the OS is CentOS 7.2.1511 (Core).
Best,
Martin
On Mon, Sep 10, 2018 at 12:23 PM Ilya Dryomov wrote:
>
> On Mon, Sep 10, 2018 at 10:46 AM Martin Palma
We are trying to unmap an rbd image from a host for deletion and are
hitting the following error:
rbd: sysfs write failed
rbd: unmap failed: (16) Device or resource busy
We used commands like "lsof" and "fuser" but nothing is reported to
use the device. Also checked for watchers with "rados -p pool
Since Prometheus uses a pull model over HTTP for collecting metrics,
what are the best practices to secure these HTTP endpoints?
- With a reverse proxy with authentication?
- Export the node_exporter only on the cluster network? (not usable
for the mgr plugin and for nodes like mons, mdss,...)
-
Hi all,
In our current production cluster we have the following CRUSH
hierarchy, see https://pastebin.com/640Q4XSH or the attached image.
This reflects the real physical deployment 1:1. We also currently use a
replication factor of 3 with the following CRUSH rule on our pools:
rule hdd_replicated {
Hello,
Is it possible to get directory/file layout information (size, pool)
of a CephFS directory directly from a metadata server without the need
to mount the fs? Or better through the restful plugin...
When mounted, I can get info about the directory/file layout using the
getfattr command...
Just ran into this problem on our production cluster.
It would have been nice if the release notes of 12.2.4 had been
updated to inform users about this.
Best,
Martin
On Wed, Mar 14, 2018 at 9:53 PM, Gregory Farnum wrote:
> On Wed, Mar 14, 2018 at 12:41 PM, Lars
ta distribution should be the same. If you were to reset the
> weights for the previous OSDs, you would only incur an additional round of
> reweighting for no discernible benefit.
>
> On Mon, Feb 26, 2018 at 7:13 AM Martin Palma <mar...@palma.bz> wrote:
>>
>> Hell
Hello,
from some OSDs in our cluster we got the "nearfull" warning message so
we run the "ceph osd reweight-by-utilization" command to better
distribute the data.
Now that we have expanded our cluster with new nodes, should we reset
the weight of the changed OSDs to 1.0?
Best,
Martin
Hello,
is there a way to get librados for MacOS? Has anybody tried to build
librados for MacOS? Is this even possible?
Best,
Martin
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Hi,
Calamari is deprecated, it was replaced by the ceph-mgr [0] from what I know.
Bye,
Martin
[0] http://docs.ceph.com/docs/master/mgr/
On Wed, Jul 19, 2017 at 6:28 PM, Oscar Segarra wrote:
> Hi,
>
> Anybody has been able to setup Calamari on Centos7??
>
> I've done a
Can "sortbitwise" also be set if we have a cluster running some OSDs on
10.2.6 and some OSDs on 10.2.9? Or should we wait until all OSDs are on
10.2.9?
Monitor nodes are already on 10.2.9.
Best,
Martin
On Fri, Jul 14, 2017 at 1:16 PM, Dan van der Ster wrote:
> On Mon, Jul
Thank you for the clarification and yes we saw that v10.2.9 was just
released. :-)
Best,
Martin
On Fri, Jul 14, 2017 at 3:53 PM, Patrick Donnelly <pdonn...@redhat.com> wrote:
> On Fri, Jul 14, 2017 at 12:26 AM, Martin Palma <mar...@palma.bz> wrote:
>> So only the ceph-mds i
So only the ceph-mds is affected? Say we have mons and OSDs
on 10.2.8 and the MDS on 10.2.6 or 10.2.7; would we be "safe"?
I'm asking since we need to add new storage nodes to our production cluster.
Best,
Martin
On Wed, Jul 12, 2017 at 10:44 PM, Patrick Donnelly
> [429280.254400] attempt to access beyond end of device
> [429280.254412] sdi1: rw=0, want=19134412768, limit=19134412767
We are seeing the same for our OSDs which have the journal as a
separate partition always on the same disk and only for OSDs which we
added after our cluster was upgraded to
Hi Wido,
thank you for the clarification. We will wait until recovery is over;
we have plenty of space on the mons :-)
Best,
Martin
On Tue, Jan 31, 2017 at 10:35 AM, Wido den Hollander <w...@42on.com> wrote:
>
>> Op 31 januari 2017 om 10:22 schreef Martin Palma <mar...@palma.bz
Hi all,
our cluster is currently performing a big expansion and is in recovery
mode (we doubled in size and OSD count, from 600 TB to 1.2 PB).
Now we get the following message from our monitor nodes:
mon.mon01 store is getting too big! 18119 MB >= 15360 MB -- 94% avail
Reading [0] it says that it is
Could you pls tell me where it is on the monitor nodes? only in the memory or
> persisted in any files or DBs? Looks like it’s not just in memory but I
> cannot find where those value saved, thanks!
>
> Best Regards,
> Dave Chen
>
> From: Martin Palma [mailto:mar...@palma.bz]
Hi,
They are stored on the monitor nodes.
Best,
Martin
On Fri, 20 Jan 2017 at 04:53, Chen, Wei D wrote:
> Hi,
>
>
>
> I have read through some documents about authentication and user
> management about ceph, everything works fine with me, I can create
>
> a user and
d to ensure that the user that will
> perform the "snap unprotect" has the "allow class-read object_prefix
> rbd_children" on all pools [1].
>
> [1] http://docs.ceph.com/docs/master/man/8/ceph-authtool/#capabilities
>
> On Thu, Jan 12, 2017 at 10:56 AM, Martin Palm
Hi all,
what permissions do I need to unprotect a protected rbd snapshot?
Currently the key interacting with the pool containing the rbd image
has the following permissions:
mon 'allow r'
osd 'allow rwx pool=vms'
When I try to unprotect a snapshot with the following command "rbd
snap unprotect
Thanks all for the clarification.
Best,
Martin
On Mon, Dec 5, 2016 at 2:14 PM, John Spray <jsp...@redhat.com> wrote:
> On Mon, Dec 5, 2016 at 12:35 PM, David Disseldorp <dd...@suse.de> wrote:
>> Hi Martin,
>>
>> On Mon, 5 Dec 2016 13:27:01 +0100, Martin Palma
OK, just discovered that with the FUSE client we have to add the '-r
/path' option to treat that path as root. So I assume the caps 'mds allow
r' are only needed if we also want to be able to mount the directory
with the kernel client. Right?
Best,
Martin
On Mon, Dec 5, 2016 at 1:20 PM, Martin Palma
Hello,
is it possible to prevent CephFS clients from mounting the root of a
CephFS filesystem and browsing through it?
We want to restrict CephFS clients to a particular directory, but when
we define a specific cephx auth key for a client we need to add the
following caps: "mds 'allow r'", which then gives
> I was wondering how exactly you accomplish that?
> Can you do this with a "ceph-deploy create" with "noin" or "noup" flags
> set, or does one need to follow the manual steps of adding an osd?
You can do it either way (manual or with ceph-deploy). Here are the
steps using ceph-deploy:
1. Add
42on.com> wrote:
>
>> Op 9 augustus 2016 om 17:44 schreef Martin Palma <mar...@palma.bz>:
>>
>>
>> Hi Wido,
>>
>> thanks for your advice.
>>
>
> Just keep in mind, you should update the CRUSHMap in one big bang. The
>
Hi Wido,
thanks for your advice.
Best,
Martin
On Tue, Aug 9, 2016 at 10:05 AM, Wido den Hollander <w...@42on.com> wrote:
>
>> Op 8 augustus 2016 om 16:45 schreef Martin Palma <mar...@palma.bz>:
>>
>>
>> Hi all,
>>
>> we are in the process o
Hi all,
we are in the process of expanding our cluster and I would like to
know if there are some best practices in doing so.
Our current cluster is composed as follows:
- 195 OSDs (14 Storage Nodes)
- 3 Monitors
- Total capacity 620 TB
- Used 360 TB
We will expand the cluster by other 14
I assume you installed Ceph using 'ceph-deploy'. I noticed the same
thing on CentOS when deploying a cluster for testing...
As Wido already noted the OSDs are marked as down & out. From each OSD
node you can do a "ceph-disk activate-all" to start the OSDs.
On Mon, Jul 18, 2016 at 12:59 PM, Wido
It seems that the packages "ceph-release-*.noarch.rpm" contain a
ceph.repo pointing to the baseurl
"http://ceph.com/rpm-hammer/rhel7/$basearch", which does not exist. It
should probably point to "http://ceph.com/rpm-hammer/el7/$basearch".
- Martin
On Thu, Jul 7, 2016 at 5:57 P
Hi All,
it seems that the "rhel7" folder/symlink on
"download.ceph.com/rpm-hammer" does not exist anymore; therefore
ceph-deploy fails to deploy a new cluster. I just tested this by setting
up a new lab environment.
We have the same issue on our production cluster currently, which
keeps us of
> -Original Message-
> From: m...@palma.bz [mailto:m...@palma.bz] On Behalf Of Martin Palma
> Sent: Wednesday, June 15, 2016 16:03
> To: DAVY Stephane OBS/OCB
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Failing upgrade from Hammer to Jewel on Cento
Hi Stéphane,
We had the same issue:
https://www.mail-archive.com/ceph-users%40lists.ceph.com/msg27507.html
Since then we have applied the fix suggested by Dan by simply adding
"ceph-disk activate-all" to rc.local.
Best,
Martin
On Wed, Jun 15, 2016 at 10:39 AM, wrote:
ichael J. Kidd
> Sr. Software Maintenance Engineer
> Red Hat Ceph Storage
> +1 919-442-8878
>
> On Tue, Mar 15, 2016 at 11:41 AM, Martin Palma <mar...@palma.bz> wrote:
>>
>> Hi all,
>>
>> The documentation [0] gives us the following formula
Hi all,
The documentation [0] gives us the following formula for calculating
the number of PG if the cluster is bigger than 50 OSDs:
Total PGs = (OSDs * 100) / pool size
When we have mixed storage servers (HDD disks and SSD disks) and we
have
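As a worked example of that formula (the 200-OSD, size-3 numbers are hypothetical, not from the thread):

```latex
\text{Total PGs} = \frac{\text{OSDs} \times 100}{\text{pool size}}
                 = \frac{200 \times 100}{3} \approx 6667
```

which one would then typically round up to the nearest power of two, i.e. 8192.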
> Hi,
>
> To clarify, I didn't notice this issue in 0.94.6 specifically... I
> just don't trust the udev magic to work every time after every kernel
> upgrade, etc.
>
> -- Dan
>
> On Mon, Mar 7, 2016 at 10:20 AM, Martin Palma <mar...@palma.bz> wrote:
>> Hi Dan,
>>
local.
> (We use this all the time anyway just in case...)
>
> -- Dan
>
> On Mon, Mar 7, 2016 at 9:38 AM, Martin Palma <mar...@palma.bz> wrote:
>> Hi All,
>>
>> we are in the middle of patching our OSD servers and noticed that
>> after rebooting no OSD
Hi All,
we are in the middle of patching our OSD servers and noticed that
after rebooting no OSD disk is mounted and therefore no OSD service
starts.
We then have to manually call "ceph-disk-activate /dev/sdX1" for each
disk in order to mount and start the OSD service again.
Here are the
Hi Maruthi,
happy to hear that it is working now.
Yes, with the latest stable release, infernalis, the "ceph" username is
reserved for the Ceph daemons.
Best,
Martin
On Tuesday, 5 January 2016, Maruthi Seshidhar
wrote:
> Thank you Martin,
>
> Yes, "nslookup "
Hi Maruthi,
and did you test that DNS name lookup properly works (e.g. nslookup
ceph-mon1 etc...) on all hosts?
From the output of 'ceph-deploy' it seems that the host can only resolve
its own name but not the others:
[ceph-mon1][DEBUG ] "monmap": {
[ceph-mon1][DEBUG ] "created":
Currently, we use approach #1 with kerberized NFSv4 and Samba (with AD as
KDC) - desperately waiting for CephFS :-)
Best,
Martin
On Tue, Dec 15, 2015 at 11:51 AM, Wade Holler wrote:
> Keep it simple is my approach. #1
>
> If needed Add rudimentary HA with pacemaker.
>
>
Hi,
from what I'm seeing, your ceph.conf isn't quite right if we take into
account your cluster description "...with one monitor node and one osd...".
The parameters "mon_initial_members" and "mon_host" should only contain
monitor nodes, not all the nodes in your cluster.
Moreover, you should
my perspective.
> >
> > And a swap partition is still needed even though the memory is big.
> > Martin Palma wrote on 18 September 2015 at 23:07: Hi,
> >
> > Is it a good idea to use a software raid for the system disk (Operating
> > System) on a Ceph storage node?
Hi,
Is it a good idea to use a software raid for the system disk (Operating
System) on a Ceph storage node? I mean only for the OS not for the OSD
disks.
And what about a swap partition? Is that needed?
Best,
Martin
should be
your correct approach.
Hope this helps,
Thanks Regards
Somnath
*From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf
Of *Martin Palma
*Sent:* Saturday, May 30, 2015 1:37 AM
*To:* ceph-users@lists.ceph.com
*Subject:* [ceph-users] SSD disk distribution
Hello,
We are planning to deploy our first Ceph cluster with 14 storage nodes and 3
monitor nodes. The storage nodes have 12 SATA disks and 4 SSDs each. 2 of the
SSDs we plan to use as
journal disks and 2 for cache tiering.
Now the question was raised in our team whether it would be better to put all SSDs
lets
Public bug reported:
I can't find the pam_tty_audit.so module. From what I know it should get
installed with the package 'libpam-modules'.
** Affects: pam (Ubuntu)
Importance: Undecided
Status: New
Hello,
not sure if this is the right place, but is there a way to use Mozilla
Minefield as the internal web browser in Eclipse?
Bye,
Martin
___
wtp-dev mailing list
wtp-dev@eclipse.org
https://dev.eclipse.org/mailman/listinfo/wtp-dev
/swt/faq.php#browserplatforms
Thanks,
Tim deBoer
deb...@ca.ibm.com
Martin Palma wrote on 01/05/2010 10:06 AM:
Hi,
I found this on the Beaglewiki on the Ubuntu installation HowTo
(http://www.beaglewiki.org/index.php/UbuntuInstall):
The Beagle 0.0.8 and 0.0.9 installations have a small bug which means
that the Evolution address book can't be searched. You can fix it by:
cd /usr/lib
sudo ln -s
Did you solve your problem in some way? I have the identical problem...
No, sorry!
Since I have Beagle, I have no F-spot :-(. I have googled a lot but found
nothing... there are some people with the same problem on the
ubuntu-users mailing list, but they also have no solution for the problem.
Bye,
Hi folks,
I have the following problem: on my router I fetch my mail from the web
server with fetchmail and then filter it with procmail, which puts it
into a user's Maildir on the router. Now I have created nice filter
rules with procmail so that
Hi,
install only the woody minimal system. Then edit your
/etc/apt/sources.list from
deb http://http.us.debian.org/debian stable main contrib non-free to
deb http://http.us.debian.org/debian testing main contrib non-free
Then at the console run:
apt-get update
apt-get -u upgrade or apt-get -u
Hello,
first of all, please excuse me for only replying now, but I have a lot
on my plate at the moment :-)
Now, one thing at a time:
Bjoern Schmidt wrote:
"Maybe it is enough to change the FontPath unix/:7110 line in the X config"
I did that, but nothing is visibly faster!
Raphael
Hello everyone,
I have the following problem: my X server starts extremely slowly. When
I boot the system and gdm starts, I wait about 1-2 minutes before I can
log in. The same happens when I log out as a user.
I thought it might be due to the
Do you have the package menu installed? It creates the Debian-specific
menu entries.
No, I did not find such a package.
Regards,
Martin
--
Frequently asked questions and answers (FAQ):
http://www.de.debian.org/debian-user-german-FAQ/
To UNSUBSCRIBE send a mail to [EMAIL
I had it in the Debian menu, but not in the GNOME menu. For that there
is the package openoffice.org-gnome.
Yes, exactly that was what I was missing; after installing it, the menu
entry appeared in the GNOME menu!
Thanks!
Regards,
Martin
Hello,
could it be that the Debian mailing list is being misused for spam?
Since I subscribed to the mailing list I have been flooded with spam!?
Regards,
Martin
Hello,
I have just installed OpenOffice under sid, but I cannot find it
anywhere in the menu. What could be the reason?
Regards,
Martin
Hello list,
can someone please tell me how to enable DMA for my two hard disks at
boot time? At runtime I use 'hdparm -d1 /dev/hda(b)'.
Simply install the tool hwtools. There you can then set these
settings in an init script