Hey,

I just tried something out; I'm not sure whether there is a better way
than this, but see if it works for you.

Assuming the text file contains the following line:

$ cat /tmp/drill/abc.csv
aaaaabbbbbbbbbb421.5cc

(field1=aaaaa [chars 1-5], field2=bbbbbbbbbb [chars 6-15], field3=42
[digits 16-17], field4=1.5 [digits 18-20], ... )

A query similar to the one below should work:

> select
>     cast(columns[0] as char(5)),
>     `right`(cast(columns[0] as char(15)), 10),
>     cast(`right`(cast(columns[0] as char(17)), 2) as int),
>     cast(`right`(cast(columns[0] as char(20)), 3) as double)
> from dfs.tmp.`/drill/abc.csv`;

+------------+------------+------------+------------+
|   EXPR$0   |   EXPR$1   |   EXPR$2   |   EXPR$3   |
+------------+------------+------------+------------+
| aaaaa      | bbbbbbbbbb | 42         | 1.5        |
+------------+------------+------------+------------+
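The trick in each expression is the same: truncating with cast(... as
char(N)) keeps the first N characters, and `right`(..., M) then takes the
last M of those, isolating characters N-M+1 through N. As a sketch of the
idea (plain Python string slicing, not Drill), the four fields above would
come out like this:

```python
# Hypothetical illustration of the fixed-width slicing the Drill query
# performs; field boundaries taken from the example line above.
line = "aaaaabbbbbbbbbb421.5cc"

# cast(columns[0] as char(5))                       -> chars 1-5
field1 = line[:5]
# `right`(cast(columns[0] as char(15)), 10)         -> chars 6-15
field2 = line[:15][-10:]
# cast(`right`(cast(... char(17)), 2) as int)       -> chars 16-17
field3 = int(line[:17][-2:])
# cast(`right`(cast(... char(20)), 3) as double)    -> chars 18-20
field4 = float(line[:20][-3:])

print(field1, field2, field3, field4)
# aaaaa bbbbbbbbbb 42 1.5
```

The same pattern extends to any number of fields: each one only needs its
ending position (the char(N) width) and its length (the `right` argument).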

Regards,
Abhishek
On Mon, Apr 20, 2015 at 3:53 PM, Yousef Lasi <[email protected]> wrote:

> Does anyone have any suggestions on querying fixed-length data files with
> Drill? These are files that are received from a mainframe source and the
> fields within a row are defined by length. For example, field1 = characters
> 1-12, field2 = characters 13-22 etc.
>
>  Thanks
>
