Re: STREAMPIPES-75: Extend data lake sink to store images

2020-02-19 Thread Philipp Zehnder
Hi Johannes,

yes, this is a very good idea. We should refactor the file adapters to store the 
files in the service Dominik described.
I created an issue for that: STREAMPIPES-80: Use internal file service in file 
adapters.

Philipp


> On 19. Feb 2020, at 20:59, Johannes Tex  wrote:
> 
> Hi,
> 
> I also think a service for file handling would be a good solution. 
> 
> At the moment we also use files for the Adapters that are stored in the 
> Worker. 
> Maybe this would be another use case for a file service?
> 
> Johannes 
> 
> On 2020/02/19 06:58:11, Dominik Riemer  wrote: 
>> Hi Philipp,
>> 
>> yes, I think it makes sense to have a single service for handling files.
>> When writing the CSVMetadataEnrichment component for Chris, I started to add 
>> a simple file management to the backend and also extended the SDK with 
>> methods to receive files from the backend (see 
>> CsvMetadataEnrichmentController and FileServingResource in the backend).
>> 
>> We could extend this, isolate the file management to an individual 
>> microservice and add a simple API in front of it that can be used by all 
>> services that require to store or receive files (e.g., also for the included 
>> assets of pipeline elements, which could be documentation, icons or ML 
>> models).
>> 
>> Concerning HDFS, in my opinion this might be an option, but as we don't 
>> store very large amounts of data yet, it would probably be a bit of 
>> overkill here (one more distributed system to manage). 
>> 
>> Dominik
>> 
>> -Original Message-
>> From: Philipp Zehnder  
>> Sent: Tuesday, February 18, 2020 6:28 PM
>> To: dev@streampipes.apache.org
>> Subject: STREAMPIPES-75: Extend data lake sink to store images
>> 
>> Hi all,
>> 
>> I finished the implementation to store images in files instead of Base64 
>> strings in InfluxDB.
>> 
>> For the first version I mounted a local volume and added the images in a 
>> folder in this volume. 
>> I think this is a good starting point because the images are stored in a 
>> local volume on the same host as the sink.
>> Now the question is how users can access those images. I would suggest 
>> extending the data lake REST API for that.
>> Therefore, the backend must mount the same volume as the container running 
>> the data lake sink.
>> 
>> Does anyone of you have an alternative solution?
>> 
>> @Dominik, you already implemented a StreamPipes-internal file storage. 
>> Could we use that for the images as well, or would the frequency be too high?
>> 
>> @all: What about HDFS? We could set up HDFS for files, similar to InfluxDB 
>> as a shared service between multiple containers.
>> 
>> 
>> Philipp
>> 




[jira] [Created] (STREAMPIPES-80) Use internal file service in file adapters

2020-02-19 Thread Philipp Zehnder (Jira)
Philipp Zehnder created STREAMPIPES-80:
--

 Summary: Use internal file service in file adapters
 Key: STREAMPIPES-80
 URL: https://issues.apache.org/jira/browse/STREAMPIPES-80
 Project: StreamPipes
  Issue Type: Improvement
  Components: Connect
Reporter: Philipp Zehnder


As described in the mailing list thread "Re: STREAMPIPES-75: Extend data lake 
sink to store images", the uploaded files of the file stream and file set 
adapters should not be stored in the local container; instead, they should be 
stored in the centralized file service.

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Image Labeling

2020-02-19 Thread Johannes Tex
Hi,

I'll start with @Dominik's question: the first intention was to make it part of 
the Data Explorer, with toggling between simple exploring and labeling. @Philipp 
opened an issue [STREAMPIPES-79] to refactor the Data Explorer; maybe in this 
context we could extend the Data Explorer with these two modes? 
To display images, for example, we need almost the same mechanism as for the 
image labeling, except the labeling itself. We also need to extend the data lake 
API for images, which leads to @Philipp's question. 

The data lake API currently supports only data that can be aggregated 
(numeric data). For image labeling and viewing we need to extend the API. 
My proposal would be to create a paging API for images to receive the next 
e.g. 10 images; it could look like "/datalake/ //". 
What do you think? Along with this necessary extension we can also create the 
API to save the annotations.
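To make the paging idea concrete, here is a minimal, hypothetical sketch of the slicing logic such an endpoint could delegate to (the class and method names are made up; the actual route and storage access are out of scope):

```java
import java.util.List;

public class ImagePaging {

    // Returns the page'th slice of size pageSize from the stored image
    // references; assumes page >= 0 and pageSize > 0. Out-of-range pages
    // yield an empty list rather than an error.
    public static List<String> page(List<String> imageIds, int page, int pageSize) {
        int from = Math.min(page * pageSize, imageIds.size());
        int to = Math.min(from + pageSize, imageIds.size());
        return imageIds.subList(from, to);
    }
}
```

E.g., page 0 with size 10 would return the first 10 image references, page 1 the next 10, and any page past the end an empty list.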

I see three different options to save the annotations:
* InfluxDB -> save the annotation directly with the data point
  - need to create a COCO file when exporting
  - need an extra place to save the (image) labels/categories
  - need to 'manipulate' the data point, which is not possible in InfluxDB 
    (only delete and create a new one)
* File
  - need to handle a file
* CouchDB
  - file generation is needed when exporting
My proposal is to use CouchDB to store the annotations. 
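For reference, a single annotation document in CouchDB could look roughly like this. The field names follow the COCO data format; the values (and the `_id`, which CouchDB assigns) are made up for illustration. On export, such documents would be collected into the standard COCO file together with the "images" and "categories" lists:

```
{
  "_id": "annotation-1",
  "image_id": 42,
  "category_id": 3,
  "bbox": [120.0, 80.0, 64.0, 48.0],
  "segmentation": [[120.0, 80.0, 184.0, 80.0, 184.0, 128.0, 120.0, 128.0]],
  "area": 3072.0,
  "iscrowd": 0
}
```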

Johannes


On 2020/02/17 21:12:38, Philipp Zehnder  wrote: 
> Hi Johannes,
> 
> as for the API, do you think we can extend the dataset API, or should we 
> create a separate REST API for image annotation?
> 
> Where do you plan to store the coco annotation information? In files or in a 
> DB?
> 
> Philipp
> 
> > On 16. Feb 2020, at 19:51, Dominik Riemer  wrote:
> > 
> > Hi Johannes,
> > sounds good!
> > I think bounding boxes and polygons are totally fine for the first 
> > prototype.
> > 
> > How do you plan to integrate the labeling tool? Will it be part of the data 
> > explorer, or do you plan to add a new component?
> > 
> > Dominik
> > 
> > On 2020/02/14 16:30:17, Johannes Tex  wrote: 
> >> Hi,
> >> 
> >> Philipp started to extend the data lake sink to store images 
> >> [STREAMPIPES-75]. 
> >> I have now started to create an image labeler that allows users to label 
> >> images in the data lake [STREAMPIPES-78]. The labels will be stored in the 
> >> COCO annotation format [1]. After labeling, the images can be used to 
> >> train a neural network. 
> >> 
> >> The main features the labeler should support:
> >> - Labeling with bounding boxes
> >> - Labeling with polygons
> >> 
> >> Do you have additional features that should also be supported?
> >> 
> >> Johannes
> >> 
> >> 
> >> [1] http://cocodataset.org/#format-data
> >> 
> >> 
> >> 
> 
> 
> 


Re: RE: STREAMPIPES-75: Extend data lake sink to store images

2020-02-19 Thread Johannes Tex
Hi,

I also think a service for file handling would be a good solution. 

At the moment we also use files for the Adapters that are stored in the Worker. 
Maybe this would be another use case for a file service?

Johannes 

On 2020/02/19 06:58:11, Dominik Riemer  wrote: 
> Hi Philipp,
> 
> yes, I think it makes sense to have a single service for handling files.
> When writing the CSVMetadataEnrichment component for Chris, I started to add 
> a simple file management to the backend and also extended the SDK with 
> methods to receive files from the backend (see 
> CsvMetadataEnrichmentController and FileServingResource in the backend).
> 
> We could extend this, isolate the file management to an individual 
> microservice and add a simple API in front of it that can be used by all 
> services that require to store or receive files (e.g., also for the included 
> assets of pipeline elements, which could be documentation, icons or ML 
> models).
> 
> Concerning HDFS, in my opinion this might be an option, but as we don't 
> store very large amounts of data yet, it would probably be a bit of 
> overkill here (one more distributed system to manage). 
> 
> Dominik
> 
> -Original Message-
> From: Philipp Zehnder  
> Sent: Tuesday, February 18, 2020 6:28 PM
> To: dev@streampipes.apache.org
> Subject: STREAMPIPES-75: Extend data lake sink to store images
> 
> Hi all,
> 
> I finished the implementation to store images in files instead of Base64 
> strings in InfluxDB.
> 
> For the first version I mounted a local volume and added the images in a 
> folder in this volume. 
> I think this is a good starting point because the images are stored in a 
> local volume on the same host as the sink.
> Now the question is how users can access those images. I would suggest 
> extending the data lake REST API for that.
> Therefore, the backend must mount the same volume as the container running 
> the data lake sink.
> 
> Does anyone of you have an alternative solution?
> 
> @Dominik, you already implemented a StreamPipes-internal file storage. Could 
> we use that for the images as well, or would the frequency be too high?
> 
> @all: What about HDFS? We could set up HDFS for files, similar to InfluxDB 
> as a shared service between multiple containers.
> 
> 
> Philipp
> 


postgres sink

2020-02-19 Thread Florian Micklich

Hi all,


I was just using the postgres sink [0] and got an error.

I am using the following Docker container:

docker run --name "streampipes_postgis" -e POSTGRES_USER=streampipes -e 
POSTGRES_PASS=streampipes -e POSTGRES_DBNAME=streampipes -p 65432:5432 -d -t 
kartoza/postgis



The database is created and so is the table, but saving the events to the DB is 
not working.

19:54:00.875 SP [Thread-2] WARN  o.a.s.s.d.jvm.postgresql.PostgreSql - USERLOG 
- correspondingPipeline: 839d7efd-d561-4731-80f9-343610fcdc5d - peURI: 
http://172.17.0.1:8005/sec/org.apache.streampipes.sinks.databases.jvm.postgresql/839d7efd-d561-4731-80f9-343610fcdc5d-org.streampipes.connect.ebf6c159-7576-4f7d-8e43-a79d4b5f8080-postgresql-0
 - Table 'testtable' was unexpectedly not found and gets recreated.
19:54:00.880 SP [Thread-2] ERROR o.a.s.s.d.jvm.postgresql.PostgreSql - USERLOG - 
correspondingPipeline: 839d7efd-d561-4731-80f9-343610fcdc5d - peURI: 
http://172.17.0.1:8005/sec/org.apache.streampipes.sinks.databases.jvm.postgresql/839d7efd-d561-4731-80f9-343610fcdc5d-org.streampipes.connect.ebf6c159-7576-4f7d-8e43-a79d4b5f8080-postgresql-0
 - ERROR: relation "testtable" already exists

The last message appears for every event that should be saved.

I had a quick look at the code but was not able to find the reason so far. The 
code has changed a lot since I last looked at it.




In the ensureDatabaseExists method in the jdbcClient I also saw a comment:

// Checks whether the database already exists (using catalogs has not worked 
with postgres)


With the following queries, I can check in Postgres whether a database, table 
or even schema already exists. Maybe this is helpful?


String checkTableName = "SELECT EXISTS (SELECT table_name FROM information_schema.tables "
    + "WHERE table_schema = '" + schemaName + "' AND table_name = '" + tableName + "') AS result;";
String checkDatabaseName = "SELECT EXISTS (SELECT 1 FROM pg_database "
    + "WHERE datname = '" + databaseName + "') AS result;";
String checkSchemaName = "SELECT EXISTS (SELECT nspname FROM pg_catalog.pg_namespace "
    + "WHERE nspname = '" + schemaName + "') AS result;";


I used this method:


    private boolean checkExistInPG(Connection conn, String query) {
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(query)) {
            return rs.next() && rs.getBoolean(1);
        } catch (SQLException e) {
            throw new SpRuntimeException("Check whether database, table or schema exists went wrong: "
                + e.getSQLState() + "\n" + e.getMessage());
        }
    }
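As a hedged side note (class name and wiring invented, not taken from the sink code): the same table check could also use a PreparedStatement, so the schema and table names are bound as parameters instead of being concatenated into the SQL string:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class PgExistenceCheck {

    // Parameterized variant of the table check above; the '?' placeholders
    // are bound below, which avoids quoting issues and SQL injection.
    public static final String TABLE_EXISTS =
        "SELECT EXISTS (SELECT table_name FROM information_schema.tables "
      + "WHERE table_schema = ? AND table_name = ?) AS result";

    public static boolean tableExists(Connection conn, String schema, String table)
            throws SQLException {
        try (PreparedStatement stmt = conn.prepareStatement(TABLE_EXISTS)) {
            stmt.setString(1, schema);
            stmt.setString(2, table);
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next() && rs.getBoolean(1);
            }
        }
    }
}
```

The database and schema checks from the queries above could be parameterized the same way.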



I would also like to start a discussion about extending the postgres sink.

Would it be a good idea to support a "db schema" user input as well? At the 
moment the table is only written to the public schema.

I saw that the jdbcClient is also used for the IoTDB sink. Would this be 
compatible? Is that also a Postgres DB?

I am asking because I am thinking about extending the Postgres sink with the 
PostGIS extension as well.


Sorry this email is longer than expected :-D


Kind regards

Florian

[0] 
https://github.com/apache/incubator-streampipes-extensions/tree/dev/streampipes-sinks-databases-jvm/src/main/java/org/apache/streampipes/sinks/databases/jvm/postgresql


Disy Informationssysteme GmbH
Florian Micklich
Lösungsentwickler
+49 721 16006 477,  florian.mickl...@disy.net

Firmensitz: Ludwig-Erhard-Allee 6, 76131 Karlsruhe
Registergericht: Amtsgericht Mannheim, HRB 107964
Geschäftsführer: Claus Hofmann

Bitte beachten Sie folgende Informationen für Kunden, Lieferanten und Bewerber
- Datenschutz: www.disy.net/datenschutz
- Informationspflichten:  www.disy.net/informationspflichten



Re: Setup dev project in Intelij

2020-02-19 Thread Florian Micklich

Hi all,

I would prefer to put the documentation in a wiki and not in the project 
itself, so it is easier to find. The documentation in the resource folder is 
something else and is already provided.

A blog post is a good idea after the story is expanded a bit with some other 
basic PEs :)


Thanks for the info about the Geo.lat and Geo.lng ontology. I was aware of this 
functionality and first implemented it that way, but I was not able to add it 
in the adapter settings, so I switched back to numbers in my PE. I was using 
the ontology terms Geo.lat and Geo.lng in the adapter settings, but that did 
not work.

Then I used the provided strings Philipp mentioned [0].

   http://www.w3.org/2003/01/geo/wgs84_pos#lat

   http://www.w3.org/2003/01/geo/wgs84_pos#long

This way it works! So I will implement this change in the committed 
LatLngToGei PE after a quick test run.

Are there any other ontology settings worth mentioning, not only for Geo? A 
drop-down list would also be a good idea.

Greetings

Florian




Am 16.02.20 um 22:09 schrieb Philipp Zehnder:

Hi Florian,

sorry for the late reply.

Very cool, I tested your processors and they worked as expected, so I will 
merge them directly.
Just one minor comment: please try to avoid logging raw events to the console. 
This makes it harder to find errors and exceptions in the logs when the service 
runs in a Docker container.

For domain properties (semantic types) of the latitude and longitude values in 
wgs84 you can use Geo.lat / Geo.lng [0].
If you add this to the requiredPropertyWithUnaryMapping the properties are then 
already pre-selected.

Regarding your question in the other mail about the env file in the module 
streampipes-processors-geo-jvm:
Each module should contain an env file for development to reduce the 
configuration effort for other developers. But I saw you already committed it 
in your pull request.

Your step-by-step guide in this email is very good; it would also be helpful 
for other developers.
My suggestion would be to add it to our developer documentation [1]: how to run 
processors from the incubator-streampipes-extensions project in IntelliJ.
What do you think?

Regarding your second pull request: The documentation you provided [2] is 
awesome.
My question to the other members of the community would be, where would we best 
keep this documentation?
 *  Wiki
 *  Documentation
 *  directly in the project
 *  somewhere else

Maybe you could also write a short blog post containing your descriptions? This 
might be a good getting-started guide for new users.

Thanks again for your contribution, I really look forward to all the geo 
processors.

Cheers,
Philipp


[0] 
https://github.com/apache/incubator-streampipes/blob/dev/streampipes-vocabulary/src/main/java/org/apache/streampipes/vocabulary/Geo.java
 

[1] https://streampipes.apache.org/docs/docs/dev-guide-introduction/ 

[2] https://github.com/giviflo/incubator-streampipes-extensions/tree/feature/geo_jts_doc 


On 2020/02/11 20:39:24, Florian Micklich 
 wrote:


Hi Philipp,

the incompatible pom settings didn't give me any rest this evening, and I 
probably found the reason why.

In the first attempt I just used "open" in IntelliJ to load the 
"/incubator-streampipes-extensions/streampipes-processors-geo-jvm" project path.

Tonight I used the "import project" option in IntelliJ and followed the 
instruction steps:

++ Select the maven project where the pom file exists --> 
/incubator-streampipes-extensions/streampipes-processors-geo-jvm to import

++ import project from external model --> maven

++ import project setup --> left all default settings as they are

++ select profile --> java8-doclint-disable in my case (don't know what this 
means)

++ select maven project to import --> 
org.apache.streampipes:streampipes-processors-geo-jvm:0.65.1-SNAPSHOT

++ select SDK --> 1.8 (in my case sdkman/candidates/java/8.0.232-zulu)

++ left project name and file location as they are

++ .idea folder already exists. Overwrite --> yes


==> sources will be loaded and almost all sources are available.

Only the following source couldn't be found:

    streampipes-extensions
    org.apache.streampipes
    0.65.1-SNAPSHOT


I copied my local env file into the develop folder, ran the ./sp start command 
in the installer folder, and everything is running quite charmingly without any 
problems.


[jira] [Created] (STREAMPIPES-79) Refactor Data Explorer

2020-02-19 Thread Philipp Zehnder (Jira)
Philipp Zehnder created STREAMPIPES-79:
--

 Summary: Refactor Data Explorer
 Key: STREAMPIPES-79
 URL: https://issues.apache.org/jira/browse/STREAMPIPES-79
 Project: StreamPipes
  Issue Type: Improvement
  Components: Backend, UI
Reporter: Philipp Zehnder
Assignee: Philipp Zehnder


Clean interfaces and UI components of the data explorer and extend it with 
functionality to view and download stored images.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Improving the Live Dashboard

2020-02-19 Thread Philipp Zehnder
Hi,

the new dashboard looks really cool and the new features are also very helpful. 
Regarding the number of widgets I am sure we will soon have more on the new 
dashboard than on the old one, since it is really simple to create new ones.

I am currently working on the integration of images into the data explorer 
stored with the data lake sink as described in STREAMPIPES-75.
During the development the question came up how we structure our core modules 
in the UI.
There are currently multiple directories containing shared components and 
services.

I actually like that we prefix those modules with “core”, and I would suggest 
moving the data model from connect to the module core-model, since it is used 
in multiple other modules.
Furthermore, I will move the components in core-ui into the data-explorer, 
since we have a different widget concept for the new dashboard and cannot 
re-use them.

What are other components we can put into the core-ui directory? (Buttons, …)

Philipp

> On 19. Feb 2020, at 09:15, Dominik Riemer  wrote:
> 
> Hi all,
> 
> good news, finally a first working version of the new live dashboard is ready!
> 
> It's already merged in dev so that we can test if everything works.
> While we have not yet reached full feature-parity with the old dashboard in 
> terms of available widgets, the new dashboard also has some new features:
> - multiple dashboards can be created
> - widgets can be moved in a grid system and resized as needed
> - widgets can be configured after they have been placed in the dashboard
> - widget colors and titles can be flexibly changed
> - new widgets can be created easily using a similar configuration approach as 
> pipeline elements
> 
> It would be great if you could test the new implementation, provide some 
> feedback and discuss about things you'd like to see!
> 
> Dominik
> 
> -Original Message-
> From: Dominik Riemer  
> Sent: Tuesday, January 21, 2020 9:49 PM
> To: dev@streampipes.apache.org
> Subject: Re: AW: Improving the Live Dashboard
> 
> Hi all,
> 
> following up on this, to give you a quick status:
> The good news is, I've started to implement the new dashboard.
> The old dashboard will be completely refactored and upgraded to an Angular 2+ 
> module, and there will be some nice new features that I mentioned in the 
> corresponding Jira ticket.
> So the not-so-good news is that it'll probably take some time as it seems to 
> be a ton of work ;-)
> 
> I hope to have a first prototype working by the end of the month. 
> 
> So if you have any new ideas what to include/improve, I'll be happy to know!
> 
> Dominik
> 
> On 2019/12/03 07:59:33, Christofer Dutz  wrote: 
>> Hi folks,
>> 
>> One thing I'd enjoy would be to have more than the fixed maximum of 6 rows.
>> 
>> Chris
>> 
>> Von: Philipp Zehnder 
>> Gesendet: Dienstag, 3. Dezember 2019 07:16:55
>> An: dev@streampipes.apache.org 
>> Betreff: Re: Improving the Live Dashboard
>> 
>> Hi,
>> 
>> thanks for creating a first draft. I think it is a very good idea and also 
>> very useful.
>> In the data explorer we already created a prototype of the backend API to 
>> also read historic values. Additionally there are already two types of 
>> visualization available with a table and a line chart. We can re-use those 
>> and extend those.
>> Let me know when you have finished the first prototype. Then I can add those 
>> two.
>> 
>> We could also think about to integrate the asset dashboard as an additional 
>> visualization. What do you think about that?
>> 
>> Cheers,
>> Philipp
>> 
>> On 2019/12/02 21:54:45, "Dominik Riemer"  wrote:
>>> Hi all,
>>> 
>>> the Live Dashboard we use to display real-time charts of sensor values can
>>> be improved in terms of usability. Some issues we currently have are:
>>> 
>>> - Users can change the order of dashboard widgets, but modified layouts
>>>   are not persisted
>>> - Changes to widgets are not persisted, e.g., the order of table columns
>>>   as Chris mentioned in a previous mail
>>> - Created visualizations cannot be modified
>>> - Currently, only a single dashboard can be created, but it would be nice
>>>   to create multiple dashboards (e.g., to visualize the state of multiple
>>>   assets)
>>> 
>>> So I think it would be great to refactor the dashboard and, at the same
>>> time, migrate the current implementation to Angular 7 (for those of you
>>> who are new, the UI currently uses both Angular 1 and 7 components and we
>>> are gradually upgrading old Angular 1 components to 7). I'd volunteer to
>>> create a first prototype of the new dashboard.
>>> 
>>> In general, I think the dashboard should be rather simple (we probably
>>> shouldn't compete with some very good visualization tools that already
>>> exist), but some things I'd like to see are:
>>> 
>>> - Having the opportunity to create standalone dashboards

Re: Time for a first Apache release?

2020-02-19 Thread Christofer Dutz
Hi Dominik,

I agree that this would be great ... after all, incubation is about learning to 
release stuff, and nothing helps better than actually doing it.
If you need help in the process, just ask. I would prefer to stay in the 
background as a helpful mentor for your first one or two releases, since I have 
noticed that when I do the first releases myself, the learning curve is not 
what it is supposed to be.

Perhaps the release documentation I just updated recently for PLC4X can be 
helpful: 
http://plc4x.apache.org/developers/release/release.html

Chris


Am 19.02.20, 09:21 schrieb "Dominik Riemer" :

Hi all,

 

since our last release is now roughly three months ago, I guess it's time to
discuss if we're ready for our first Apache release.

 

Are there any issues or features you'd like to complete before the next
release? From my side, it's mainly the dashboard.

 

Besides that, there are some licensing issues we should take care of before
the first release, but which shouldn't take that long:

- check 3rd party libraries in the UI source and add the proper
license headers

- check streampipes-measurement-units and how to handle licenses of
the used vocabularies

- extend the LICENSE/LICENSE-binary files with the UI dependencies

- clarify how to handle the streampipes-empire-rdf dependency, which
is still published under org.streampipes coordinates.

 

So what do you think? Should we try for a first Apache release by the end of
the month?

 

Dominik

 

 





Time for a first Apache release?

2020-02-19 Thread Dominik Riemer
Hi all,

 

since our last release is now roughly three months ago, I guess it's time to
discuss if we're ready for our first Apache release.

 

Are there any issues or features you'd like to complete before the next
release? From my side, it's mainly the dashboard.

 

Besides that, there are some licensing issues we should take care of before
the first release, but which shouldn't take that long:

- check 3rd party libraries in the UI source and add the proper
license headers

- check streampipes-measurement-units and how to handle licenses of
the used vocabularies

- extend the LICENSE/LICENSE-binary files with the UI dependencies

- clarify how to handle the streampipes-empire-rdf dependency, which
is still published under org.streampipes coordinates.

 

So what do you think? Should we try for a first Apache release by the end of
the month?

 

Dominik

 

 



RE: AW: Improving the Live Dashboard

2020-02-19 Thread Dominik Riemer
Hi all,

good news, finally a first working version of the new live dashboard is ready!

It's already merged in dev so that we can test if everything works.
While we have not yet reached full feature-parity with the old dashboard in 
terms of available widgets, the new dashboard also has some new features:
- multiple dashboards can be created
- widgets can be moved in a grid system and resized as needed
- widgets can be configured after they have been placed in the dashboard
- widget colors and titles can be flexibly changed
- new widgets can be created easily using a similar configuration approach as 
pipeline elements

It would be great if you could test the new implementation, provide some 
feedback and discuss about things you'd like to see!

Dominik

-Original Message-
From: Dominik Riemer  
Sent: Tuesday, January 21, 2020 9:49 PM
To: dev@streampipes.apache.org
Subject: Re: AW: Improving the Live Dashboard

Hi all,

following up on this, to give you a quick status:
The good news is, I've started to implement the new dashboard.
The old dashboard will be completely refactored and upgraded to an Angular 2+ 
module, and there will be some nice new features that I mentioned in the 
corresponding Jira ticket.
So the not-so-good news is that it'll probably take some time as it seems to be 
a ton of work ;-)

I hope to have a first prototype working by the end of the month. 

So if you have any new ideas what to include/improve, I'll be happy to know!

Dominik

On 2019/12/03 07:59:33, Christofer Dutz  wrote: 
> Hi folks,
> 
> One thing I'd enjoy would be to have more than the fixed maximum of 6 rows.
> 
> Chris
> 
> Von: Philipp Zehnder 
> Gesendet: Dienstag, 3. Dezember 2019 07:16:55
> An: dev@streampipes.apache.org 
> Betreff: Re: Improving the Live Dashboard
> 
> Hi,
> 
> thanks for creating a first draft. I think it is a very good idea and also 
> very useful.
> In the data explorer we already created a prototype of the backend API to 
> also read historic values. Additionally there are already two types of 
> visualization available with a table and a line chart. We can re-use those 
> and extend those.
> Let me know when you have finished the first prototype. Then I can add those 
> two.
> 
> We could also think about to integrate the asset dashboard as an additional 
> visualization. What do you think about that?
> 
> Cheers,
> Philipp
> 
> On 2019/12/02 21:54:45, "Dominik Riemer"  wrote:
> > Hi all,
> >
> > the Live Dashboard we use to display real-time charts of sensor values
> > can be improved in terms of usability. Some issues we currently have are:
> >
> > - Users can change the order of dashboard widgets, but modified layouts
> >   are not persisted
> > - Changes to widgets are not persisted, e.g., the order of table columns
> >   as Chris mentioned in a previous mail
> > - Created visualizations cannot be modified
> > - Currently, only a single dashboard can be created, but it would be nice
> >   to create multiple dashboards (e.g., to visualize the state of multiple
> >   assets)
> >
> > So I think it would be great to refactor the dashboard and, at the same
> > time, migrate the current implementation to Angular 7 (for those of you
> > who are new, the UI currently uses both Angular 1 and 7 components and we
> > are gradually upgrading old Angular 1 components to 7). I'd volunteer to
> > create a first prototype of the new dashboard.
> >
> > In general, I think the dashboard should be rather simple (we probably
> > shouldn't compete with some very good visualization tools that already
> > exist), but some things I'd like to see are:
> >
> > - Having the opportunity to create standalone dashboards (some companies
> >   we've talked to already mentioned that they would like to have
> >   individual, configurable dashboards that can be used to display
> >   condition data at the shop floor level)
> > - Such dashboards could be shared / implemented as web components to be
> >   integrated into other systems
> > - Read-only mode
> > - A very fluent way to modify the layout of widgets and the dashboard
> >   itself
> >
> > So what do you think? Which features would you like to see in a real-time
> > dashboard to monitor IIoT data?
> >
> > Dominik