Hi Ken,

I don't believe so; /tmp should be on my main disk:

Timothys-MacBook-Pro:~ tfarkas$ diskutil list

/dev/disk0 (internal):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                         251.0 GB   disk0
   1:                        EFI EFI                     314.6 MB   disk0s1
   2:          Apple_CoreStorage Macintosh HD            250.0 GB   disk0s2
   3:                 Apple_Boot Recovery HD             650.0 MB   disk0s3

/dev/disk1 (internal, virtual):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:                  Apple_HFS Macintosh HD           +249.7 GB   disk1
                                 Logical Volume on disk0s2
                                 DA9C82BE-D97D-4D65-8166-9F742F9AC884
                                 Unencrypted

Timothys-MacBook-Pro:~ tfarkas$ mount
/dev/disk1 on / (hfs, local, journaled)
devfs on /dev (devfs, local, nobrowse)
map -hosts on /net (autofs, nosuid, automounted, nobrowse)
map auto_home on /home (autofs, automounted, nobrowse)
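
As a sanity check, here's roughly how I'd verify which mount the JVM's temp dir actually resolves to (I believe the JVM on macOS defaults java.io.tmpdir to the per-user directory under /var/folders/... rather than /tmp, though both should still be on disk1 here). The commands are standard; the grep filter is just for convenience:

# where the shell's temp dir points, plus free space there, on /tmp, and on /
echo $TMPDIR
df -h "$TMPDIR" /tmp /

# what the JVM itself reports for java.io.tmpdir
java -XshowSettings:properties -version 2>&1 | grep tmpdir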



Thanks,

Tim

On Fri, Jun 14, 2019 at 12:33 PM Ken Krugler <kkrugler_li...@transpac.com>
wrote:

> Hi Tim,
>
> I wouldn’t expect these tests to consume 30GB of space.
>
> Any chance your temp dir is using a mount point with much less free space?
>
> — Ken
>
>
> On Jun 14, 2019, at 12:28 PM, Timothy Farkas <timothytiborfar...@gmail.com>
> wrote:
>
> Hi All,
>
> I get *Caused by: java.io.IOException: No space left on device* errors from
> some tests when running the Flink unit tests on my Mac. I have 30 GB of free
> space on my machine, and I am building the latest code from the master
> branch. The following tests in flink-runtime are failing with this error:
>
> [INFO] Results:
> [INFO]
> [ERROR] Errors:
> [ERROR]   SlotCountExceedingParallelismTest.testNoSlotSharingAndBlockingResultBoth:91->submitJobGraphAndWait:97 » JobExecution
> [ERROR]   SlotCountExceedingParallelismTest.testNoSlotSharingAndBlockingResultReceiver:84->submitJobGraphAndWait:97 » JobExecution
> [ERROR]   SlotCountExceedingParallelismTest.testNoSlotSharingAndBlockingResultSender:77->submitJobGraphAndWait:97 » JobExecution
> [ERROR]   ScheduleOrUpdateConsumersTest.testMixedPipelinedAndBlockingResults:128 » JobExecution
>
> I tried reducing the test parallelism with -Dflink.forkCount=2, but that
> did not help. I'm confident that the tests are the issue, since I can see
> disk usage increase in real time as I run the tests. After the tests
> complete, the disk usage decreases.
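>
> For reference, this is roughly how I'm invoking the build and watching disk
> usage while it runs (the module selection and exact goals are approximate):
>
> # roughly the build invocation
> mvn clean test -pl flink-runtime -Dflink.forkCount=2
>
> # in a second terminal: free space on / and temp dir size, every 5 seconds
> while true; do df -h / | tail -1; du -sh "$TMPDIR" 2>/dev/null; sleep 5; done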
>
> Is this a known issue? Or would this be something worth investigating as an
> improvement?
>
> Thanks,
> Tim
>
>
> --------------------------
> Ken Krugler
> +1 530-210-6378
> http://www.scaleunlimited.com
> Custom big data solutions & training
> Flink, Solr, Hadoop, Cascading & Cassandra
>
>
