We currently have two methods to detect Unicode characters. Which is
better, primarily in terms of performance? I want to consolidate them.
There may be other such code scattered around!

        public static boolean hasMultibyte(String value) {
            if (value == null) return false;
            for (int i = 0; i < value.length(); i++) {
                char c = value.charAt(i);
                if (c > 0xFF) return true;
            }
            return false;
        }
        
        public static boolean isUnicodeFormat(final String format) {
            try {
                return !format.equals(
                    new String(format.getBytes("ISO-8859-1"), "ISO-8859-1"));
            } catch (UnsupportedEncodingException e) {
                return true;
            }
        }
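For what it's worth, the two methods appear to answer the same question: ISO-8859-1 covers exactly U+0000 through U+00FF, so the encode/decode round trip in isUnicodeFormat fails precisely when some char is above 0xFF, which is what hasMultibyte tests directly. The loop avoids allocating a byte[] and a new String per call, so it should be the faster of the two. A hedged sketch of a consolidated helper (class name StringUtil is my own placeholder, not an existing POI class):

```java
// Sketch of a consolidated check, assuming the intent of both methods
// is "does this String contain any char outside ISO-8859-1 (> 0xFF)?".
// Keeps the allocation-free loop of hasMultibyte rather than the
// encode/decode round trip of isUnicodeFormat.
public final class StringUtil {
    private StringUtil() {}

    public static boolean hasMultibyte(String value) {
        if (value == null) return false;
        for (int i = 0; i < value.length(); i++) {
            // Any char above 0xFF cannot be represented in ISO-8859-1.
            if (value.charAt(i) > 0xFF) return true;
        }
        return false;
    }
}
```

Callers of isUnicodeFormat could then delegate to this method, modulo the null handling (the original isUnicodeFormat would throw a NullPointerException on null, while this returns false).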
Mailing List:    http://jakarta.apache.org/site/mail2.html#poi
The Apache Jakarta POI Project: http://jakarta.apache.org/poi/