In the Slick examples there are a few cases of joins where one of the resulting columns can be null, as can happen when doing left, right, or outer joins. For example:
val explicitLeftOuterJoin = for {
  (c, s) <- Coffees leftJoin Suppliers on (_.supID === _.id)
} yield (c.name, s.name.?)
But what if I want to return the entire mapped object? What I mean is:
val explicitLeftOuterJoin = for {
  (c, s) <- Coffees leftJoin Suppliers on (_.supID === _.id)
} yield (c, s.?)
This doesn't seem to work; it complains: "could not find implicit value for evidence parameter of type scala.slick.lifted.TypeMapper[Suppliers]". Basically, I'd like it to return a list of tuples of (Coffee, Option[Supplier]).
Why doesn't this work, and what's the fix for it? Especially since this works fine:
val q = for {
  c <- Coffees
  s <- Suppliers
} yield (c, s)
(I know that's an inner join)
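A sketch of one common workaround in Slick 1.x: yield the nullable side column by column with .?, then rebuild the Option[Supplier] after running the query. The Coffee(name, supID, price) and Supplier(id, name, street) shapes below are trimmed for brevity and would need to match the real projections.

// Sketch only: run inside a session; adjust the column lists to the actual case classes.
val leftOuterJoin = for {
  (c, s) <- Coffees leftJoin Suppliers on (_.supID === _.id)
} yield (c.name, c.supID, c.price, s.id.?, s.name.?, s.street.?)

val coffeesWithSupplier: List[(Coffee, Option[Supplier])] = leftOuterJoin.list map {
  case (name, supID, price, sId, sName, sStreet) =>
    (Coffee(name, supID, price), sId.map(id => Supplier(id, sName.get, sStreet.get)))
}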
Source: (StackOverflow)
Joining between two tables is done like this:
(for {
  (computer, company) <- Computers leftJoin Companies on (_.companyId === _.id)
  if computer.name.toLowerCase like filter.toLowerCase()
}
But in case joining is required across more tables, what is the right way? I am trying the code below, but it doesn't work:
(for {
  (computer, company, suppliers) <- Computers leftJoin Companies on (_.companyId === _.id)
  //not right: leftJoin Suppliers on (_.suppId === _.id)
  if computer.name.toLowerCase like filter.toLowerCase()
}
Source: (StackOverflow)
How do I get the auto-incremented values for records inserted with Slick? The following code prints 1111. I would have expected it to print 1234
import scala.slick.driver.H2Driver.simple._

object TestMappedTable extends App {
  case class User(id: Option[Int], first: String, last: String)

  object Users extends Table[User]("users") {
    def id = column[Int]("id", O.PrimaryKey, O.AutoInc)
    def first = column[String]("first")
    def last = column[String]("last")
    def * = id.? ~ first ~ last <> (User, User.unapply _)
  }

  implicit val session = Database.forURL("jdbc:h2:mem:test1", driver = "org.h2.Driver").createSession()

  session.withTransaction {
    Users.ddl.create
    print(Users.insert(User(None, "Jack", "Green")))
    print(Users.insert(User(None, "Joe", "Blue")))
    print(Users.insert(User(None, "John", "Purple")))
    print(Users.insert(User(None, "Jim", "Yellow")))
  }
}
I'm using Slick 0.11.2 for Scala 2.10.0-RC1
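A sketch, assuming this Slick version supports `returning` on insert projections: Users.insert returns the affected row count (hence the four 1s printed). To get the generated keys, insert through a projection that ends in `returning id`.

// added inside object Users:
def autoInc = first ~ last returning id

// inside the transaction; returns the generated id instead of the row count:
val jackId: Int = Users.autoInc.insert(("Jack", "Green"))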
Source: (StackOverflow)
I am using Slick 1.0.0-RC2. I have defined the following two tables, Directorate and ServiceArea, where Directorate has a one-to-many relationship with ServiceArea:
case class Directorate(dirCode: String, name: String)

object Directorates extends Table[Directorate]("DIRECTORATES") {
  def dirCode = column[String]("DIRECTORATE_CODE", O.PrimaryKey)
  def name = column[String]("NAME")
  def * = dirCode ~ name <> (Directorate, Directorate.unapply _)
}

case class ServiceArea(areaCode: String, dirCode: String, name: String)

object ServiceAreas extends Table[ServiceArea]("SERVICE_AREAS") {
  def areaCode = column[String]("AREAE_CODE", O.PrimaryKey)
  def dirCode = column[String]("DIRECTORATE_CODE")
  def name = column[String]("NAME")
  def directorate = foreignKey("DIR_FK", dirCode, Directorates)(_.dirCode)
  def * = areaCode ~ dirCode ~ name <> (ServiceArea, ServiceArea.unapply _)
}
To make the Directorate case class useful in my Play application form, I am trying to redefine the Directorate case class to have a Seq of the ServiceAreas that are related to that Directorate:
case class Directorate(dirCode: String, name: String, serviceAreas: Seq[ServiceArea])
My problem is now with the Directorate table projection. I have attempted to create a method in Directorates:
def serviceAreas = (for {
  a <- ServiceAreas if a.dirCode === dirCode
} yield a).list map {
  case t: ServiceArea => t
}
so that I can try something like
def * = dirCode ~ name ~ serviceAreas <> (Directorate, Directorate.unapply _)
but this cannot work, as serviceAreas only goes one way.
It seems reasonable to me that, for the Directorate case class to be a useful domain object, it should be able to contain the related ServiceAreas.
I'm wondering how I should traverse the inverse relationship so that the Directorate table projection will work.
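A sketch of one approach, assuming the projection has to stay flat (it can only contain columns): keep the row-level Directorate as it is and assemble the aggregate in Scala after querying. DirectorateWithAreas is a hypothetical wrapper, not part of the original schema.

case class DirectorateWithAreas(directorate: Directorate, serviceAreas: Seq[ServiceArea])

def findWithAreas(code: String)(implicit session: Session): Option[DirectorateWithAreas] =
  (for { d <- Directorates if d.dirCode === code } yield d).firstOption map { d =>
    val areas = (for { a <- ServiceAreas if a.dirCode === code } yield a).list
    DirectorateWithAreas(d, areas)
  }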
Source: (StackOverflow)
I'm trying to filter against an optional date column with Scala Slick 1.0.1.
It may be that I'm just not seeing it, but I've got a table that looks something like this:
case class UserRole(id: UUID, userID: UUID, role: String)

object UserRole extends Table[UserRole]("User_Role") {
  //(id: Long = 0l, name: String, active: Boolean) extends KeyedEntity[Long] {
  def id = column[UUID]("ID", O.PrimaryKey)
  def userID = column[UUID]("user_id")
  def vendorID = column[UUID]("vendor_id")
  def role = column[String]("role")
  def user = foreignKey("user_FK", userID, User)(_.id)
  def start = column[java.sql.Date]("startDate")
  def endDate = column[Option[java.sql.Date]]("endDate")
  def * = id ~ userID ~ role <> (UserRole.apply _, UserRole.unapply _)
}
You'll see that endDate is optional.
How do I construct a query that filters so endDate is either NULL/None or greater than the current (DB) date? FYI, I'm generally using the embedded API.
Thanks.
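A sketch, assuming a java.sql.Date value is passed in for "now" (rather than a database-side current-date expression): keep rows whose endDate is NULL or in the future.

def activeRoles(now: java.sql.Date) =
  for {
    r <- UserRole if r.endDate.isNull || r.endDate > now
  } yield r

// usage inside a session: activeRoles(new java.sql.Date(System.currentTimeMillis)).list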
Source: (StackOverflow)
I'm currently writing Slick code to target an old schema with two tables of more than 22 columns each. How do I use the new HList code? I've got 2.0-M3 working fine in other respects under Scala 2.10.3.
Here's the syntax I'm currently using with case classes / tuples. What would I do to use the new HLists mentioned in the docs?
case class Joiner(
  id: Int,
  name: Option[String],
  contact: Option[String]
)

class Joiners(tag: Tag) extends Table[Joiner](tag, "joiner") {
  def id = column[Int]("id", O.PrimaryKey, O.AutoInc, O.DBType("int(11)"))
  def name = column[Option[String]]("name", O.DBType("varchar(255)"))
  def contact = column[Option[String]]("contact", O.DBType("text"))
  def * = (id, name, contact) <> (Joiner.tupled, Joiner.unapply)
}

val joiners = TableQuery[Joiners]
I don't see any HList examples, and there is only a brief mention in the newly updated docs. I'm new to Scala as well as Slick.
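A sketch of the same table using the Slick 2.0 HList support: the element type of the table becomes the HList type, and since the columns are already Option, no .? is needed.

import scala.slick.collection.heterogenous._
import scala.slick.collection.heterogenous.syntax._

class Joiners(tag: Tag) extends Table[Int :: Option[String] :: Option[String] :: HNil](tag, "joiner") {
  def id      = column[Int]("id", O.PrimaryKey, O.AutoInc, O.DBType("int(11)"))
  def name    = column[Option[String]]("name", O.DBType("varchar(255)"))
  def contact = column[Option[String]]("contact", O.DBType("text"))
  def * = id :: name :: contact :: HNil
}
val joiners = TableQuery[Joiners]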
Source: (StackOverflow)
According to the Slick 2.0 documentation, to get the count of rows in a table:
val q1 = coffees.length
// compiles to SQL (simplified):
// select count(1) from "COFFEES"
However, it turns out that coffees.length is of type Column[Int]. How does one execute the query and get the value?
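A sketch, assuming Slick 2.0's support for running scalar values directly: inside a session, a Column[Int] such as coffees.length can be executed with .run.

db.withSession { implicit session =>
  val count: Int = coffees.length.run
}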
Source: (StackOverflow)
I'm new to Scala (using 2.10) and Slick (using 2.0-M2). I see that one of the ways to get around the 22 column limit for tables in Slick is to use nested tuples. I can't figure out how to do that, despite finding this partial code on GitHub.
The current Scala dev branch (2.11-M5) supports case classes with more than 22 fields, but not tuples with arity greater than 22, and Slick is not yet distributed for Scala 2.11 pre-releases. How can I define a 33-column table (and have it work with all of Slick's syntactic sugar)?
N.B., I'm trying to support an existing schema and can't change the table normalization.
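A sketch of the nested-tuple pattern in Slick 2.0, shown with only a few columns for brevity; the same nesting lets each inner tuple stay below 22 elements while the outer shape still maps to a single case class. The names here are illustrative, not from the real schema.

case class Row(id: Int, a: String, b: String, c: String)

class Wide(tag: Tag) extends Table[Row](tag, "wide") {
  def id = column[Int]("id", O.PrimaryKey)
  def a  = column[String]("a")
  def b  = column[String]("b")
  def c  = column[String]("c")

  def * = ((id, a), (b, c)) <> (
    { case ((id, a), (b, c)) => Row(id, a, b, c) },
    { r: Row => Some(((r.id, r.a), (r.b, r.c))) }
  )
}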
Source: (StackOverflow)
I am using Slick 1, and I need to be able to apply a filter in a query to look up all entities that match a condition in a related table.
This example, based on the Slick documentation, shows what I am trying to do (it is a contrived example that is close to my situation).
Here, I want all coffees that are provided by suppliers on the west coast. I want the Coffee only; I am only interested in navigating to Suppliers to apply the filter:
val westCoast = Seq("CA", "OR", "WA")
val implicitInnerJoin = for {
  c <- Coffees
  s <- Suppliers if c.supID === s.id && (s.state inSet westCoast)
} yield c
This works ok, but it will duplicate Coffees if there is more than one match in the Suppliers table.
The obvious workaround in normal SQL is to do a SELECT DISTINCT; however, I cannot find a way to do that here.
You could in theory do a:
query.list.distinct
after the results are already returned; however, I have also implemented paging support, so you wouldn't want to process the results once they already come back from the database. Here is the paging support:
query.drop(offset).take(limit).list
So, in a nutshell, I need a way to specify SELECT DISTINCT in my query that goes out.
Anyone have any ideas?
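A sketch of one way to avoid the duplicates altogether, assuming Slick 1's Query.exists is available: filter Coffees with an EXISTS-style subquery instead of joining, so no DISTINCT is needed and drop/take still page over Coffees rows.

val westCoast = Seq("CA", "OR", "WA")
val coffeesFromWestCoast = for {
  c <- Coffees
  if (for { s <- Suppliers if s.id === c.supID && (s.state inSet westCoast) } yield s).exists
} yield c

val page = coffeesFromWestCoast.drop(offset).take(limit).list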
Source: (StackOverflow)
How does one insert records into PostgreSQL using AutoInc keys with Slick mapped tables? If I use an Option for the id in my case class and set it to None, then PostgreSQL will complain on insert that the field cannot be null. This works for H2, but not for PostgreSQL:
//import scala.slick.driver.H2Driver.simple._
//import scala.slick.driver.BasicProfile.SimpleQL.Table
import scala.slick.driver.PostgresDriver.simple._
import Database.threadLocalSession

object TestMappedTable extends App {
  case class User(id: Option[Int], first: String, last: String)

  object Users extends Table[User]("users") {
    def id = column[Int]("id", O.PrimaryKey, O.AutoInc)
    def first = column[String]("first")
    def last = column[String]("last")
    def * = id.? ~ first ~ last <> (User, User.unapply _)
    def ins1 = first ~ last returning id
    val findByID = createFinderBy(_.id)
    def autoInc = id.? ~ first ~ last <> (User, User.unapply _) returning id
  }

  // implicit val session = Database.forURL("jdbc:h2:mem:test1", driver = "org.h2.Driver").createSession()
  implicit val session = Database.forURL("jdbc:postgresql:test:slicktest",
    driver = "org.postgresql.Driver",
    user = "postgres",
    password = "xxx")

  session.withTransaction {
    Users.ddl.create
    // insert data
    print(Users.insert(User(None, "Jack", "Green")))
    print(Users.insert(User(None, "Joe", "Blue")))
    print(Users.insert(User(None, "John", "Purple")))
    val u = Users.insert(User(None, "Jim", "Yellow"))
    // println(u.id.get)
    print(Users.autoInc.insert(User(None, "Johnathan", "Seagul")))
  }

  session.withTransaction {
    val queryUsers = for {
      user <- Users
    } yield (user.id, user.first)
    println(queryUsers.list)
    Users.where(_.id between(1, 2)).foreach(println)
    println("ID 3 -> " + Users.findByID.first(3))
  }
}
Using the above with H2 succeeds, but if I comment out the H2 lines and switch to PostgreSQL as shown, then I get:
[error] (run-main) org.postgresql.util.PSQLException: ERROR: null value in column "id" violates not-null constraint
org.postgresql.util.PSQLException: ERROR: null value in column "id" violates not-null constraint
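A sketch of the usual fix, assuming the schema lets PostgreSQL fill the serial column itself: insert through a projection that omits id entirely (so no NULL is ever sent for it), with `returning id` handing back the generated key.

// added inside object Users:
def forInsert = first ~ last <> (
  { (f: String, l: String) => User(None, f, l) },
  { (u: User) => Some((u.first, u.last)) }
) returning id

// usage inside the transaction:
val newId: Int = Users.forInsert.insert(User(None, "Johnathan", "Seagul"))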
Source: (StackOverflow)
I don't know how to configure GuiceApplicationBuilder in such a way that I am able to load controllers that require a DatabaseConfigProvider to be injected.
I'd like to specify an alternative Postgres database for testing, or an in-memory database (if that is possible).
Code
class User extends MySpecs with OneAppPerTest {

  override def newAppForTest(testData: TestData) = new GuiceApplicationBuilder()
    // Somehow bind a database here, I guess?
    .build()

  "A test" should "test" in {
    val result = Application.instanceCache[api.controller.User]
      .apply(app)
      .list()(FakeRequest())
    ...
  }
}
Stacktrace
[info] - should return an entity *** FAILED ***
[info] com.google.inject.ConfigurationException: Guice configuration errors:
[info]
[info] 1) No implementation for play.api.db.slick.DatabaseConfigProvider was bound.
[info] while locating play.api.db.slick.DatabaseConfigProvider
[info] for parameter 1 at api.controller.User.<init>(User.scala:22)
[info] while locating api.controller.User
[info]
[info] 1 error
[info] at com.google.inject.internal.InjectorImpl.getProvider(InjectorImpl.java:1042)
[info] at com.google.inject.internal.InjectorImpl.getProvider(InjectorImpl.java:1001)
[info] at com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:1051)
[info] at play.api.inject.guice.GuiceInjector.instanceOf(GuiceInjectorBuilder.scala:321)
[info] at play.api.inject.guice.GuiceInjector.instanceOf(GuiceInjectorBuilder.scala:316)
[info] at play.api.Application$$anonfun$instanceCache$1.apply(Application.scala:234)
[info] at play.api.Application$$anonfun$instanceCache$1.apply(Application.scala:234)
[info] at play.utils.InlineCache.fresh(InlineCache.scala:69)
[info] at play.utils.InlineCache.apply(InlineCache.scala:55)
[info] ...
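A sketch, assuming play-slick's standard configuration keys: override the default database in the test application with an in-memory H2 instance (or point the same keys at an alternative Postgres URL).

override def newAppForTest(testData: TestData) = new GuiceApplicationBuilder()
  .configure(
    "slick.dbs.default.driver"    -> "slick.driver.H2Driver$",
    "slick.dbs.default.db.driver" -> "org.h2.Driver",
    "slick.dbs.default.db.url"    -> "jdbc:h2:mem:test;DB_CLOSE_DELAY=-1"
  )
  .build()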
Source: (StackOverflow)
I have a very simple query which in SQL can be represented as follows:
SELECT
c.id,
count(cp.product_id)
FROM cart c LEFT OUTER JOIN cart_product cp ON c.id = cp.cart_id
WHERE c.id = 3
GROUP BY c.id;
I was very surprised that, when using the Slick DSL to represent the above query, the query generated from the following code:
Cart.joinLeft(CartProduct)
  .on { case (c, cp) => c.id === cp.cartId }
  .filter { case (c, cp) => c.id === 3 }
  .groupBy { case (c, cp) => c.id }
  .map { case (c, pr) => (c, pr.length) }
looked as follows:
SELECT
x2.x3,
count(1)
FROM (SELECT
x4.x5 AS x3,
x4.x6 AS x7,
x8.x9 AS x10,
x8.x11 AS x12,
x8.x13 AS x14,
x8.x15 AS x16
FROM (SELECT
x17."id" AS x5,
x17."user_id" AS x6
FROM "cart" x17) x4 LEFT OUTER JOIN (SELECT
1 AS x9,
x18."id" AS x11,
x18."cart_id" AS x13,
x18."product_id" AS x15
FROM "cart_product" x18) x8 ON x4.x5 = x8.x13) x2
WHERE x2.x3 = 3
GROUP BY x2.x3;
What am I doing wrong? Is it normal to see such nested queries? What is the point of using the Slick DSL if the complexity of the query grows so quickly? I could probably write native SQL, but I really like the Slick DSL. What are the techniques for optimizing Slick queries?
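As a sketch of the native-SQL fallback mentioned above, assuming Slick 3 with the profile's api._ in scope (the exact profile import depends on the Slick version and database): the plain SQL interpolator keeps the hand-written query while still returning typed results.

import slick.jdbc.PostgresProfile.api._

val cartId = 3
val action = sql"""
  SELECT c.id, count(cp.product_id)
  FROM cart c LEFT OUTER JOIN cart_product cp ON c.id = cp.cart_id
  WHERE c.id = $cartId
  GROUP BY c.id
""".as[(Int, Int)]

// db.run(action) yields a Future[Vector[(Int, Int)]]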
Source: (StackOverflow)
I have the following enum:
object LoginStatus extends Enumeration() with BitmaskedEnumeration {
  type LoginStatus = Value
  val Active = Value("A")
  val Inactive = Value("I")
}
I need to persist the enum's value "A", but when the SQL is generated the result is 0.
This is the table mapping:
object LoginTable extends Table[Login]("login") {
  def idLogin = column[Int]("idlogin", O.PrimaryKey, O.AutoInc)
  def cdLogin = column[String]("cdlogin", O.NotNull)
  def cdPass = column[String]("cdPass", O.NotNull)
  def stLogin = column[LoginStatus]("stlogin", O.NotNull, O.DBType("character(1)"))
}
How do I persist the enum value?
I implemented:
implicit val charMapper = MappedTypeMapper.base[Char, String](
  b => b.toString(),
  i => i.charAt(0))

implicit def enum2StringMapper(enum: Enumeration) = MappedTypeMapper.base[enum.Value, Char](
  b => b.toString.charAt(0),
  i => enum.withName(i.toString))

implicit val LoginStatusMapper = enum2StringMapper(LoginStatus)
but it results in:
[error] c.Login - Invalid value for type int : A
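A sketch of an alternative, assuming Slick 1.x's MappedTypeMapper: skip the Char detour and map the enumeration straight to a single-character String, so the column round-trips as "A"/"I" instead of an Int ordinal.

implicit val loginStatusMapper =
  MappedTypeMapper.base[LoginStatus.Value, String](
    status => status.toString,          // "A" or "I" into the database
    str    => LoginStatus.withName(str) // and back to the enum value
  )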
Source: (StackOverflow)
I want to force Slick to create queries like
select max(price) from coffees where ...
but Slick's documentation doesn't help:
val q = Coffees.map(_.price) //this is query Query[Coffees.type, ...]
val q1 = q.min // this is Column[Option[Double]]
val q2 = q.max
val q3 = q.sum
val q4 = q.avg
Because q1-q4 aren't queries, I can't get the results, but I can use them inside other queries.
This statement
for {
  coffee <- Coffees
} yield coffee.price.max
generates the right query but is deprecated (it produces the warning: "method max in class ColumnExtensionMethods is deprecated: Use Query.max instead").
How do I generate such a query without warnings?
Another issue is aggregating with group by:
"select name, max(price) from coffees group by name"
I tried to solve it with:
(for {
  coffee <- Coffees
} yield (coffee.name, coffee.price.max)).groupBy(x => x._1)
which generates
select x2.x3, x2.x3, x2.x4 from (select x5."COF_NAME" as x3, max(x5."PRICE") as x4 from "coffees" x5) x2 group by x2.x3
which causes an obvious DB error:
column "x5.COF_NAME" must appear in the GROUP BY clause or be used in an aggregate function
How do I generate such a query?
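A sketch for Slick 1.x, assuming an implicit session is in scope: wrap the aggregate column in Query(...) to run it, and use groupBy with an aggregate inside map for the per-name maximum.

// single aggregate, runs as SELECT max(price) FROM coffees:
val maxPrice: Option[Double] = Query(Coffees.map(_.price).max).first

// per-name aggregate, runs as a GROUP BY query:
val maxPricePerName: List[(String, Option[Double])] =
  Coffees
    .groupBy(_.name)
    .map { case (name, group) => (name, group.map(_.price).max) }
    .list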
Source: (StackOverflow)
I am in the process of migrating from Slick 1 to Slick 2, and in Slick 2 you are meant to use the tupled method when projecting onto a case class (as shown here: http://slick.typesafe.com/doc/2.0.0-RC1/migration.html).
The problem arises when the case class has a companion object, i.e. if you have something like this:
case class Person(firstName: String, lastName: String)
along with a companion object:
object Person {
  def something = "rawr"
}
In the same scope, the tupled method no longer works, because it's trying to call tupled on the object instead of the case class.
Is there a way to get at the case class Person rather than the object, so you can call tupled properly?
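A sketch of the usual workaround: eta-expand the companion's apply method and tuple that, which works even when the hand-written companion object is what the name Person resolves to.

def * = (firstName, lastName) <> ((Person.apply _).tupled, Person.unapply)

Alternatively, making the companion extend the matching function type (object Person extends ((String, String) => Person)) restores Person.tupled itself.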
Source: (StackOverflow)