Hi all,
According to the official documentation, a SQLContext can load a database table
into a DataFrame using the Data Sources API. However, it only supports the
following properties:

url: The JDBC URL to connect to.

dbtable: The JDBC table that should be read. Note that anything that is valid
in a `FROM` clause of a SQL query can be used. For example, instead of a full
table you could also use a subquery in parentheses.

driver: The class name of the JDBC driver needed to connect to this URL. This
class will be loaded on the master and workers before running any JDBC
commands, to allow the driver to register itself with the JDBC subsystem.

partitionColumn, lowerBound, upperBound, numPartitions: These options must all
be specified if any of them is specified. They describe how to partition the
table when reading in parallel from multiple workers. partitionColumn must be
a numeric column from the table in question.
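
As a sanity check on my reading, I assume the four partition options are all
passed in the same options map, something like the sketch below (the numeric
column "id" and the bound values are made up for illustration):

    // Sketch of a partitioned read, per my reading of the docs above.
    // "id" is a hypothetical numeric column of schema.tab_users;
    // the bounds and partition count are placeholder values.
    val partitioned = sqlContext.load("jdbc", Map(
      "url" -> "jdbc:postgresql://192.168.1.110:5432/demo",
      "driver" -> "org.postgresql.Driver",
      "dbtable" -> "schema.tab_users",
      "partitionColumn" -> "id",
      "lowerBound" -> "1",
      "upperBound" -> "100000",
      "numPartitions" -> "4"
    ))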
This leaves me confused: how do I pass the username, password, or other
connection info? BTW, I am connecting to PostgreSQL like this:

    val dataFrame = sqlContext.load("jdbc", Map(
      "url" -> "jdbc:postgresql://192.168.1.110:5432/demo",  // how to pass username and password?
      "driver" -> "org.postgresql.Driver",
      "dbtable" -> "schema.tab_users"
    ))
Thanks.
Regards,
Yi


