Wow.

OK, the hal SMF service does not have the properties that are listed in the
Examples section of the hald(1M) man page.
After clearing hal and attempting to enable it using svcadm, I ran the
daemon by hand with the hald command:

hald --daemon=yes --use-syslog --verbose=yes

At that point, hal was online and running according to svcs -x hal.
Because I was unable to change the property or run
svcadm enable hal, I didn't think that the setting was in the
repository, or that it would hold.

It holds for all practical purposes.
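
For what it's worth, a likely explanation for the setting seeming not to stick: svccfg setprop writes to the repository's editing snapshot, and the running service only picks the value up after a refresh. A minimal sketch, assuming the default FMRI from the man page (adjust if your instance differs):

```shell
# Write the property to the repository (editing snapshot)...
svccfg -s svc:/system/hal:default setprop hal/verbose = true

# ...then commit it to the running snapshot:
svcadm refresh svc:/system/hal:default

# Clear maintenance state (only needed if the service is in maintenance):
svcadm clear svc:/system/hal:default
```

Without the refresh, listprop still shows the new value but the daemon keeps starting with the old one.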

Booting up is very slow, and the login screen is damaged - very light font.
However, the desktop is now fully functional - totally robust.  Hurrah.

Thank you so much,
Sharon

Darren Kenny wrote:
> Hi Sharon,
>
> Firstly check if SUNWhal is properly installed (pkgchk -v SUNWhal).
>
> If HAL is in maintenance mode, try just "clearing" it first:
>
>       svcadm clear svc:/system/hal:default
>
> It may be that hal simply entered maintenance mode due to dbus not running
> (it's a dependency of HAL).
>
> If it still doesn't start after that, then you'll have to try enabling some
> verbose output; the hald man page has more details:
>
>        # svccfg
>        svc:> select hal
>        svc:/system/hal> listprop hal/*
>        hal/verbose          boolean  false
>        hal/use_syslog       boolean  false
>        svc:/system/hal> setprop hal/verbose=true
>        svc:/system/hal> exit
>
> This might give you more, so try clearing it again, and see if there is more
> in the log file.
>
> Probably the best people to look at any remaining HAL issues are the
> tamarack-discuss at opensolaris.org people...
>
> HTH,
>
> Darren.
>
> Sharon Veach wrote:
>> Hi, Darren:
>>
>> Thank you for the pointers.  DBus is online; HAL is in maintenance. ZFS 
>> is OK.
>> And svcadm enable hal doesn't work, nor does svcadm refresh hal. 
>> The log isn't very helpful - maybe the config file is corrupt, but its 
>> location is not mentioned, nor is its name.
>> If I can re-install particular packages, do you know which ones I should 
>> add back in?
>>
>> more /var/svc/log/system-hal:default.log
>> [ Oct 12 20:35:18 Disabled. ]
>> [ Oct 12 20:35:18 Rereading configuration. ]
>> [ Oct 12 20:36:05 Enabled. ]
>> [ Oct 12 20:37:21 Executing start method ("/lib/svc/method/svc-hal start") ]
>> [ Oct 12 20:38:33 Method "start" exited with status 0 ]
>> [ Oct 12 20:38:33 Rereading configuration. ]
>> [ Oct 12 20:38:33 No 'refresh' method defined.  Treating as :true. ]
>> [ Oct 15 08:50:08 Stopping because service disabled. ]
>> [ Oct 15 08:50:08 Executing stop method (:kill) ]
>> [ Oct 15 09:03:26 Executing start method ("/lib/svc/method/svc-hal start") ]
>> [ Oct 15 09:04:30 Method "start" exited with status 0 ]
>> [ Oct 15 09:10:14 Executing start method ("/lib/svc/method/svc-hal start") ]
>> [ Oct 15 09:11:21 Method "start" exited with status 0 ]
>> [ Oct 19 12:08:39 Stopping because service disabled. ]
>> [ Oct 19 12:08:39 Executing stop method (:kill) ]
>> [ Oct 24 09:30:20 Executing start method ("/lib/svc/method/svc-hal start") ]
>> hal failed to start: error 2
>> [ Oct 24 09:34:30 Method "start" exited with status 95 ]
>> [ Oct 24 09:38:08 Executing start method ("/lib/svc/method/svc-hal start") ]
>> hal failed to start: error 2
>> [ Oct 24 09:42:19 Method "start" exited with status 95 ]
>> [ Oct 25 10:24:37 Rereading configuration. ]
>>
>> thanx for any other pointers, Sharon
>>
>>
>> Darren Kenny wrote:
>>> Hi Sharon,
>>>
>>> In the first two cases (g-v-m and ospm-applet) it would appear that one or
>>> both of the HAL and D-Bus system services are not running; check using:
>>>
>>>     svcs hal dbus
>>>
>>> Both should be in the "online" state. If they are offline enable them using:
>>>
>>>     svcadm enable dbus
>>>     svcadm enable hal
>>>
>>> If they are in maintenance mode, then you will need more information, which
>>> you should be able to locate using:
>>>
>>>     svcs -xv hal dbus
>>>
>>> w.r.t. ZFS - I have no idea why you're seeing problems, but a good start
>>> would be to check the pool status:
>>>
>>>     zpool status -v
>>>
>>> This might point you towards any problems there, if there are any.
>>>
>>> HTH,
>>>
>>> Darren.
>>>
>>> Sharon Veach wrote:
>>>> I am running Nevada build 71 on 2 systems. One system is fine.  The 
>>>> other was working until I powered down
>>>> my system last Friday. Both systems use the same $HOME information.
>>>>
>>>> I get 2 dialog boxes:
>>>>
>>>> * gnome-volume-manager crashed
>>>>
>>>> * application ospm crashed
>>>>
>>>> Can this be fixed, or do I need to reinstall some or all of b71?
>>>>
>>>> During booting, the screen shows "ZFS: configuring" for a long time
>>>> (yes, I have a ZFS partition).
>>>> When the login screen appears, the fonts are odd - thinner than they 
>>>> should be.
>>>>
>>>> The desktop appears and dialog boxes with the above 2 errors appear.  
>>>> The ospm one would re-display
>>>> for a while. It might still be behind some windows.
>>>>
>>>> I have no Launch menu, cannot move items to other workspaces, but can 
>>>> type and use applications in the
>>>> workspace where the app was launched.  Audio is just machine noise.
>>>> I have no background picture, and the system seems to be running
>>>> slowly.  An application that usually takes 2 minutes to launch is taking 
>>>> over 5 minutes.
>>>>
>>>> Ideas?
>>>>
>>>> Thanx, Sharon
>>>> _______________________________________________
>>>> desktop-discuss mailing list
>>>> desktop-discuss at opensolaris.org
