alamb opened a new issue, #8013:
URL: https://github.com/apache/arrow-rs/issues/8013

   This ticket tracks a potential performance improvement for i256 --> f64 conversion via manual bit twiddling, suggested by @scovich.
   
   Bit twiddling that I _think_ would work:
   
   1. Define `i256::leading_zeros()` that follows the semantics of `leading_zeros` on the other integral types:
       ```rust
       impl i256 {
           pub fn leading_zeros(&self) -> u32 {
               match self.high {
                   // high 128 bits are all zero: count them plus the low half's leading zeros
                   0 => 128 + self.low.leading_zeros(),
                   // otherwise only the high half contributes leading zeros
                   _ => self.high.leading_zeros(),
               }
           }
       }
       ```
   2. Define a notion of "redundant leading sign bits" in terms of leading 
zeros:
       ```rust
       fn redundant_leading_sign_bits_i256(n: i256) -> u32 {
           let mask = n >> 255; // arithmetic shift: all ones (negative) or all zeros
           // XOR clears the sign bit, so leading_zeros() is at least 1;
           // subtract 1 because exactly one sign bit must be kept
           (n ^ mask).leading_zeros() - 1
       }
       ```
   3. Shift out all redundant leading sign bits when converting to f64:
       ```rust
       fn i256_to_f64(n: i256) -> f64 {
           let k = redundant_leading_sign_bits_i256(n);
           let n = n << k; // left-justify (no redundant sign bits)
           let n = (n.high >> 64) as i64; // throw away the lower 192 bits
           // convert to f64 and scale by 2^(192 - k); k can exceed 192 for small
           // magnitudes, so the exponent must be signed
           (n as f64) * 2.0_f64.powi(192 - k as i32)
       }
       ```
   
   The above should work for both positive and negative values.
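
   As a sanity check on the approach (not part of the arrow API), here is a self-contained sketch of the same three steps specialized to plain `i128`, using only the standard library. The names `redundant_leading_sign_bits_i128` and `i128_to_f64` are made up for this illustration, and the result can be compared directly against the compiler's `as f64` cast:

   ```rust
   /// Number of leading bits that merely repeat the sign bit.
   fn redundant_leading_sign_bits_i128(n: i128) -> u32 {
       let mask = n >> 127; // arithmetic shift: all ones (negative) or all zeros
       // XOR clears the sign bit, so leading_zeros() is always at least 1
       (n ^ mask).leading_zeros() - 1 // keep exactly one sign bit
   }

   fn i128_to_f64(n: i128) -> f64 {
       let k = redundant_leading_sign_bits_i128(n);
       let n = n << k; // left-justify: no redundant sign bits remain
       let m = (n >> 64) as i64; // keep the top 64 bits, discard the lower 64
       (m as f64) * 2.0_f64.powi(64 - k as i32) // undo the shifts: scale by 2^(64 - k)
   }

   fn main() {
       // Spot checks against the compiler's own cast. Truncating the low bits
       // before the final rounding can differ from `as f64` by one ulp in rare
       // cases, so the comparison allows a one-ulp tolerance.
       for v in [0i128, 1, -1, 12345, -98765, i128::MAX, i128::MIN] {
           let got = i128_to_f64(v);
           let want = v as f64;
           assert!((got - want).abs() <= want.abs() * f64::EPSILON, "{v}: {got} vs {want}");
       }
       println!("bit-twiddled i128 -> f64 matches `as f64` on these samples");
   }
   ```

   The i256 version above has the same shape, just with a 255-bit sign mask, the top 64 of 256 bits kept, and a scale factor of 2^(192 - k).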
   
   _Originally posted by @scovich in 
https://github.com/apache/arrow-rs/pull/7986#discussion_r2230070906_