ScalaCheck

Property-based testing for Scala


How can I test Java programs with ScalaCheck?

I have read in the ScalaCheck user guide that it's a tool for testing Scala and Java programs.

I wonder: is that just marketing, or would testing a Java-only codebase with it be a reasonable idea? And if so, what's the best way to integrate it into a Java project?
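
For what it's worth, here is a minimal sketch of what such an integration can look like. ScalaCheck only needs the compiled Java classes on the test classpath, so the properties are written in Scala while the code under test stays Java. IntRange (and its package) is a hypothetical Java class used purely for illustration:

import org.scalacheck.{Gen, Prop, Properties}
// Hypothetical Java class under test: new IntRange(lo, hi), r.contains(x)
import com.example.IntRange

object IntRangeSpec extends Properties("IntRange") {
  property("contains its bounds") =
    Prop.forAll(Gen.choose(0, 100), Gen.choose(0, 100)) { (a, b) =>
      val (lo, hi) = if (a <= b) (a, b) else (b, a)
      val r = new IntRange(lo, hi)
      r.contains(lo) && r.contains(hi)
    }
}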


Source: (StackOverflow)

ScalaCheck - Ordered array generator

I am trying out ScalaCheck for the first time and I want to generate an ordered array of Ints.

I read the documentation and did some searching, but I didn't find a way to do it.

Can someone shed some light on this?

Thanks
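
A minimal sketch of one way to do it, by generating a list and sorting it:

import org.scalacheck.Gen
import org.scalacheck.Arbitrary.arbitrary

// Generate a list of Ints, sort it, and convert it to an Array.
val orderedArray: Gen[Array[Int]] =
  Gen.listOf(arbitrary[Int]).map(_.sorted.toArray)

Depending on your ScalaCheck version, Gen.containerOf[Array, Int](arbitrary[Int]).map(_.sorted) may work as well.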


Source: (StackOverflow)

Generate Strings from Grammar in ScalaCheck

In Scala, I have a grammar implemented using the Parser Combinators library. Now, what I want to do is generate random strings given a grammar from the parser combinators library.

It seems to me that what the ScalaCheck library does is somehow the opposite of parser combinators, in that it combines generators instead of parsers.

Is there already a way to generate strings using the Parser Combinators or ScalaCheck, or is there a straightforward way of transforming a Parser Combinator into a generator?
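
As far as I know there is no built-in way to run a parser combinator backwards, but as an illustration, hand-mirroring a toy grammar (expr ::= num | num op expr; the grammar is made up for this example) into ScalaCheck generators looks like this: alternation becomes Gen.oneOf, sequencing becomes a for-comprehension, and Gen.sized/Gen.lzy keep the recursion finite.

import org.scalacheck.Gen

val num: Gen[String] = Gen.choose(0, 99).map(_.toString)
val op: Gen[String]  = Gen.oneOf("+", "-", "*")

def expr: Gen[String] = Gen.sized { size =>
  if (size <= 1) num
  else Gen.oneOf(num, for {
    n <- num
    o <- op
    e <- Gen.resize(size / 2, Gen.lzy(expr))
  } yield s"$n $o $e")
}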


Source: (StackOverflow)

Scalacheck won't properly report the failing case

I've written the following spec:

"An IP4 address" should "belong to just one class" in {
    val addrs = for {
        a <- Gen.choose(0, 255)
        b <- Gen.choose(0, 255)
        c <- Gen.choose(0, 255)
        d <- Gen.choose(0, 255)
    } yield s"$a.$b.$c.$d"

    forAll (addrs) { ip4s =>
        var c: Int = 0
        if (IP4_ClassA.unapply(ip4s).isDefined) c = c + 1
        if (IP4_ClassB.unapply(ip4s).isDefined) c = c + 1
        if (IP4_ClassC.unapply(ip4s).isDefined) c = c + 1
        if (IP4_ClassD.unapply(ip4s).isDefined) c = c + 1
        if (IP4_ClassE.unapply(ip4s).isDefined) c = c + 1
        c should be (1)
    }
}

Its intent should be quite clear.

The test passes successfully. However, when I force it to fail (for example by commenting out one of the if statements), ScalaCheck correctly reports the error, but the message doesn't show the actual value used to evaluate the property. More specifically, I get:

[info] An IP4 address
[info] - should belong to just one class *** FAILED ***
[info]   TestFailedException was thrown during property evaluation.
[info]     Message: 0 was not equal to 1
[info]     Location: (NetSpec.scala:105)
[info]     Occurred when passed generated values (
[info]       arg0 = "" // 4 shrinks
[info]     )

where, as you can see, arg0 = "" // 4 shrinks doesn't show the actual value.

I've even tried adding a simple println statement to inspect the cases, but the output appears to be trimmed. I get something like this:

192.168.0.1
189.168.
189.
1

SOLUTION

import org.scalacheck.Prop.forAllNoShrink
import org.scalatest.prop.Checkers.check

"An IP4 address" should "belong to just one class" in {
  val addrs = for {
    a <- Gen.choose(0, 255)
    b <- Gen.choose(0, 255)
    c <- Gen.choose(0, 255)
    d <- Gen.choose(0, 255)
  } yield s"$a.$b.$c.$d"
  check {
    forAllNoShrink(addrs) { ip4s =>
      var c: Int = 0
      if (IP4.ClassA.unapply(ip4s).isDefined) c = c + 1
      if (IP4.ClassB.unapply(ip4s).isDefined) c = c + 1
      if (IP4.ClassC.unapply(ip4s).isDefined) c = c + 1
      if (IP4.ClassD.unapply(ip4s).isDefined) c = c + 1
      if (IP4.ClassE.unapply(ip4s).isDefined) c = c + 1
      c == 1
    }
  }
}
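
If your ScalaCheck version has Shrink.shrinkAny, an alternative that keeps ScalaTest's forAll is to put a no-op Shrink[String] in scope, so the reported arg0 is the generated address rather than a shrunk fragment (a sketch, same idea as forAllNoShrink):

import org.scalacheck.Shrink

// Disable shrinking for Strings in this spec; failing values are then
// reported exactly as generated.
implicit val noShrinkString: Shrink[String] = Shrink.shrinkAny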

Source: (StackOverflow)

Scalacheck is ignoring the provided generators

I'm trying to implement a simple property check, but ScalaCheck is ignoring my generators. What am I doing wrong here?

object AlgorithmTest extends Properties("Algorithm") {
  property("Test") = forAll (Gen.choose(0,10)) (n => n>=0 & n<10)
}

and this is the result in SBT

[info] ! Algorithm.Test: Falsified after 12 passed tests.
[info] > ARG_0: -1
[error] Failed: : Total 1, Failed 1, Errors 0, Passed 0, Skipped 0
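
For reference, the same property written with forAllNoShrink, so the reported counter-example comes straight from the generator instead of from shrinking (which is where the -1 comes from); note that Gen.choose(0, 10) is inclusive of 10, so n < 10 can still legitimately fail:

import org.scalacheck.{Gen, Prop, Properties}

object AlgorithmTest extends Properties("Algorithm") {
  // Same property, but the failing argument is not shrunk before reporting.
  property("Test") = Prop.forAllNoShrink(Gen.choose(0, 10))(n => n >= 0 && n < 10)
}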

Source: (StackOverflow)

scalacheck Arbitrary implicits and recursive generators

I'm seeing what seems to be a very obvious bug in ScalaCheck; if it's really there, I can't see how people use it for recursive data structures.

This program fails with a StackOverflowError before ScalaCheck takes over, while constructing the Arbitrary value. Note that the Tree type and the generator for Trees are taken verbatim from this ScalaCheck tutorial.

package treegen

import org.scalacheck._
import Prop._

class TreeProperties extends Properties("Tree") {

  trait Tree
  case class Node(left: Tree, right: Tree) extends Tree
  case class Leaf(x: Int) extends Tree

  val ints = Gen.choose(-100, 100)

  def leafs: Gen[Leaf] = for {
    x <- ints
  } yield Leaf(x)

  def nodes: Gen[Node] = for {
    left <- trees
    right <- trees
  } yield Node(left, right)

  def trees: Gen[Tree] = Gen.oneOf(leafs, nodes)

  implicit lazy val arbTree: Arbitrary[Tree] = Arbitrary(trees)

  property("vacuous") = forAll { t: Tree => true }
}

object Main extends App {
  (new TreeProperties).check
}

What's stranger is that changes that shouldn't affect anything seem to alter the program so that it works. For example, if you change the definition of trees to this, it passes without any problem:

  def trees: Gen[Tree] = for {
    x <- Gen.oneOf(0, 1)
    t <- if (x == 0) {leafs} else {nodes}
  } yield t

Even stranger, if you alter the binary tree structure so that the value is stored on Nodes and not on Leafs, and alter the leafs and nodes definition to be:

  def leafs: Gen[Leaf] = Gen.value(Leaf())

  def nodes: Gen[Node] = for {
    x <- ints     // Note: be sure to ask for x first, or it'll StackOverflow later, inside scalacheck code!
    left <- trees
    right <- trees
  } yield Node(left, right, x)

It also then works fine.

What's going on here? Why is constructing the Arbitrary value initially causing a stack overflow? Why does it seem that scalacheck generators are so sensitive to minor changes that shouldn't affect the control flow of the generators?

Why isn't my expression above with the oneOf(0, 1) exactly equivalent to the original oneOf(leafs, nodes) ?
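
For comparison, a commonly suggested shape for recursive generators defers the recursive branch with Gen.lzy and bounds the depth with Gen.sized/Gen.resize (a sketch, reusing the leafs and nodes definitions above):

def trees: Gen[Tree] = Gen.sized { size =>
  if (size <= 1) leafs
  else Gen.oneOf(leafs, Gen.lzy(Gen.resize(size / 2, nodes)))
}

Gen.lzy takes its argument by name, so nodes is not evaluated while the generator is being constructed, and the shrinking size bound keeps the expected tree depth finite during generation.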


Source: (StackOverflow)

Generate Option[T] in ScalaCheck

I am trying to generate optional parameters in ScalaCheck, without success.

There seems to be no direct mechanism for this. Gen.containerOf[Option, Thing](thingGenerator) fails because it cannot find an implicit Buildable[Thing, Option].

I tried

for {
  thing <- Gen.listOfN[Thing](1, thingGenerator)
} yield thing.headOption

But this doesn't work because listOfN produces a list that is always of length N. As a result I always get a Some[Thing]. Similarly, listOf1 does not work, because (a) it doesn't produce empty lists, but also (b) it is inefficient because I can't set a max limit on the number of elements.

How can I generate Option[Thing] that includes Nones?

EDIT: I have found a solution, but it is not succinct. Is there a better way than this?

for {
  thing <- for {
    qty <- Gen.choose(0,1)
    things <- Gen.listOfN[Thing](qty, thingGenerator)
  } yield things.headOption
} yield thing

EDIT 2: I generalised this to

def optional[T](g: Gen[T]) = 
  for (qty <- Gen.choose(0, 1); xs <- Gen.listOfN[T](qty, g)) yield xs.headOption

So I don't have to write it more than once. But surely this is in the library already and I just missed it?
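
Recent ScalaCheck versions ship this as Gen.option; if yours predates it, a Gen.frequency version avoids the listOfN detour (a sketch; Gen.const was called Gen.value in older releases):

import org.scalacheck.Gen

// If available in your version:
val optThing: Gen[Option[Thing]] = Gen.option(thingGenerator)

// Otherwise, roughly the same thing by hand, biased towards Some:
def optional[T](g: Gen[T]): Gen[Option[T]] =
  Gen.frequency(1 -> Gen.const(Option.empty[T]), 3 -> g.map(Option(_)))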


Source: (StackOverflow)

How to get ScalaCheck's Arbitrary to always generate some special case values?

I'd like to have all my properties always be tested with at least a fixed set of special values in addition to some random values. I'd like to define this in my generator specification, not in every test using that generator type. For instance, if I were generating Ints, I'd like my generator to always generate at least 0, 1 and -1 for each test case. Is this possible?

The best I've come up with so far is to make a sized generator where the smallest n sizes correspond to my n special cases. This is problematic at least because all possible sizes are not tested when the max number of tests is configured to be lower than the max size parameter.
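
One commonly suggested alternative is to bake the special cases into the Arbitrary itself with Gen.frequency, so every property using that Arbitrary sees them often (a sketch for Int; the weights are arbitrary, and this makes 0, 1 and -1 very likely in each run rather than strictly guaranteed):

import org.scalacheck.{Arbitrary, Gen}

implicit val specialInts: Arbitrary[Int] = Arbitrary(
  Gen.frequency(
    1 -> Gen.oneOf(0, 1, -1),        // the fixed special cases
    9 -> Arbitrary.arbInt.arbitrary  // ScalaCheck's default Int generator
  )
)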


Source: (StackOverflow)

Make ScalaCheck tests deterministic

I would like to make my ScalaCheck property tests in my specs2 test suite deterministic, temporarily, to ease debugging. Right now, different values could be generated each time I re-run the test suite, which makes debugging frustrating, because you don't know if a change in observed behaviour is caused by your code changes, or just by different data being generated.

How can I do this? Is there an official way to set the random seed used by ScalaCheck?

I'm using sbt to run the test suite.

Bonus question: Is there an official way to print out the random seed used by ScalaCheck, so that you can reproduce even a non-deterministic test run?
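
If your ScalaCheck is new enough to have org.scalacheck.rng.Seed (1.14 and later), the test parameters can carry a fixed initial seed; how those Parameters are threaded through specs2 depends on the specs2 version, so treat this only as a sketch of the ScalaCheck side:

import org.scalacheck.Test
import org.scalacheck.rng.Seed

// Fix the starting seed so the same values are generated on every run.
val params: Test.Parameters =
  Test.Parameters.default.withInitialSeed(Seed(123456789L))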


Source: (StackOverflow)

High-Order ScalaCheck

Consider the following definition of a category:

trait Category[~>[_, _]] {
  def id[A]: A ~> A
  def compose[A, B, C](f: A ~> B)(g: B ~> C): A ~> C
}

Here's an instance for unary functions:

object Category {
  implicit def fCat = new Category[Function1] {
    def id[A] = identity
    def compose[A, B, C](f: A => B)(g: B => C) = g.compose(f)
  }
}

Now, categories are subject to some laws. Relating composition (.) and identity (id):

forall f: categoryArrow -> id . f == f . id == f

I want to test this with ScalaCheck. Let's try for functions over integers:

"Categories" should {
  import Category._

  val intG  = { (_ : Int) - 5 }

  "left identity" ! check {
    forAll { (a: Int) => fCat.compose(fCat.id[Int])(intG)(a) == intG(a) }      
  }

  "right identity" ! check {
    forAll { (a: Int) => fCat.compose(intG)(fCat.id)(a) == intG(a) }      
  }
}

But these are quantified over (i) a specific type (Int), and (ii) a specific function (intG). So here's my question: how far can I go in terms of generalizing the above tests, and how? Or, in other words, would it be possible to create a generator of arbitrary A => B functions, and provide those to ScalaCheck?
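
On the second point: ScalaCheck can already supply arbitrary Function1 values, since there is an Arbitrary[A => B] instance whenever an Arbitrary[B] is in scope (newer versions also require a Cogen[A] so the generated functions actually depend on their argument). A sketch of the laws quantified over the function as well, still at fixed types:

  "left identity" ! check {
    forAll { (f: Int => Int, a: Int) =>
      fCat.compose(fCat.id[Int])(f)(a) == f(a)
    }
  }

  "right identity" ! check {
    forAll { (f: Int => Int, a: Int) =>
      fCat.compose(f)(fCat.id[Int])(a) == f(a)
    }
  }

Quantifying over the element types themselves can't be done at runtime; the usual compromise is to instantiate the same law at a few representative types.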


Source: (StackOverflow)

Scalacheck/Scalatest with parametric types

I want to test a generic stack with scalatest and scalacheck. So far I have this:

"Stack" should "pop the last value pushed" in {
  check(doPushPop(element))
}

def doPushPop[T](element : T) : Boolean = {
  val stack = new Stack[T]
  stack.push(element)
  stack.pop() == element
}

However, this obviously doesn't compile. How do I specify the generic type as part of the test?
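
The property has to run at concrete types, because ScalaCheck needs an Arbitrary instance for whatever it feeds in. A common pattern is to keep doPushPop generic and instantiate it per type inside the property (a sketch, reusing doPushPop as defined above):

import org.scalacheck.Prop.forAll

"Stack" should "pop the last value pushed" in {
  check(forAll { (n: Int) => doPushPop(n) })
  check(forAll { (s: String) => doPushPop(s) })
}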


Source: (StackOverflow)

Difference between ScalaCheck Arbitrary[T] and Scalacheck Gen[T]

In my tests I make quite extensive use of Specs2 + ScalaCheck, and there are some patterns to factor out. I still haven't worked out whether my functions should take an Arbitrary[T] or a Gen[T], since they are very similar:

sealed abstract class Arbitrary[T] {
  val arbitrary: Gen[T]
}

Should a function signature look like this:

maxSizedIntervalArbitrary[A,B](implicit ordering:Ordering[A], genStart:Arbitrary[A], genEnd:Arbitrary[B]):Arbitrary[TreeMap[A,B]]

or should I work at the Gen abstraction level?
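
Converting between the two is cheap in either direction, which is why many people write the building blocks as Gen and only wrap them in Arbitrary at the edge where implicit resolution is wanted. A sketch (the interval generator is made up for the example):

import org.scalacheck.{Arbitrary, Gen}

val intervalGen: Gen[(Int, Int)] = for {
  a <- Gen.choose(0, 100)
  b <- Gen.choose(a, 100)
} yield (a, b)

// Wrap a Gen when an implicit Arbitrary is needed...
implicit val arbInterval: Arbitrary[(Int, Int)] = Arbitrary(intervalGen)

// ...and unwrap an Arbitrary when a Gen is needed.
val backToGen: Gen[(Int, Int)] = Arbitrary.arbitrary[(Int, Int)]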


Source: (StackOverflow)

Scalacheck json and case classes

I'm writing a service that takes a case class and serializes it to JSON, which I will then send to an instance running Elasticsearch.

I'd like scalacheck to generate several case classes, with random missing data, like this:

val searchDescAndBrand = SearchEntry("", "Ac Adapters", "Sony", "", "", "", "", "", "", "", "", "", "", "", "","", "", "", "", "", "", 0L)
val searchBrand = SearchEntry("", "", "Sony", "", "", "", "", "", "", "", "", "", "", "", "","", "", "", "", "", "", 0L)
val searchPartNumberAndBrand = SearchEntry("02DUYT", "", "Sony", "", "", "", "", "", "", "", "", "", "", "", "","", "", "", "", "", "", 0L)

You get the idea: either fill in the values or leave them empty (the last one is a Long).

This is the easy part; the problem is that the generated JSON doesn't just omit the field, but omits a whole section. For example:

    """
  |{
  |  "from" : 0,
  |  "size" : 10,
  |  "query" : {
  |    "bool" : {
  |      "must" : [
  |        {"match" : {
  |          "description" : {
  |            "query" : "Ac Adapters",
  |            "type" : "phrase"
  |          }
  |        }},
  |        {"match" : {
  |          "brand" : {
  |            "query" : "Sony",
  |            "type" : "phrase"
  |          }
  |        }}
  |      ]
  |    }
  |  }
  |}
  |
""".stripMargin)

If I had a case class with the first three fields populated, the JSON would be:

    """
  |{
  |  "from" : 0,
  |  "size" : 10,
  |  "query" : {
  |    "bool" : {
  |      "must" : [
  |        {"match" : {
  |          "part_number" : {
  |            "query" : "02D875",
  |            "type" : "phrase"
  |          }
  |        }},
  |        {"match" : {
  |          "description" : {
  |            "query" : "Ac Adapters",
  |            "type" : "phrase"
  |          }
  |        }},
  |        {"match" : {
  |          "brand" : {
  |            "query" : "Sony",
  |            "type" : "phrase"
  |          }
  |        }}
  |      ]
  |    }
  |  }
  |}
  |
""".stripMargin)

So, in short, having a value means adding

{"match" : {
  |          "<specific name here, based on which value we have>" : {
  |            "query" : "<value from scalacheck>",
  |            "type" : "phrase"
  |          }
  |        }}

to the result.

How would you handle such a use case?
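
One way to approach it, sketched with a hypothetical cut-down three-field version of SearchEntry: generate each field as either empty or populated, then build the "must" array only from the non-empty fields, so a whole match section disappears together with its value:

import org.scalacheck.Gen

// Hypothetical, cut-down stand-in for the real SearchEntry.
case class MiniSearchEntry(partNumber: String, description: String, brand: String)

val entries: Gen[MiniSearchEntry] = for {
  part  <- Gen.oneOf("", "02DUYT")
  desc  <- Gen.oneOf("", "Ac Adapters")
  brand <- Gen.oneOf("", "Sony")
} yield MiniSearchEntry(part, desc, brand)

// One match clause per non-empty field; empty fields contribute nothing,
// so their whole section is omitted from the "must" array.
def matchClause(field: String, value: String): String =
  s"""{"match": {"$field": {"query": "$value", "type": "phrase"}}}"""

def mustClauses(e: MiniSearchEntry): Seq[String] =
  Seq(
    "part_number" -> e.partNumber,
    "description" -> e.description,
    "brand"       -> e.brand
  ).collect { case (f, v) if v.nonEmpty => matchClause(f, v) }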


Source: (StackOverflow)

Can ScalaCheck/Specs warnings safely be ignored when using SBT with ScalaTest?

I have a simple FunSuite-based ScalaTest:

package pdbartlett.hello_sbt                                                                        

import org.scalatest.FunSuite                                                                       

class SanityTest extends FunSuite {                                                                 

  test("a simple test") {                                                                           
    assert(true)                                                                                    
  }                                                                                                 

  test("a very slightly more complicated test - purposely fails") {                                 
    assert(42 === (6 * 9))                                                                          
  }                                                                                                 
}

Which I'm running with the following SBT project config:

import sbt._                                                                                        

class HelloSbtProject(info: ProjectInfo) extends DefaultProject(info) {                             

  // Dummy action, just to show config working OK.                                                  
  lazy val solveQ = task { println("42"); None }                                                    

  // Managed dependencies                                                                           
  val scalatest = "org.scalatest" % "scalatest" % "1.0" % "test"                                    
}

However, when I run sbt test I get the following warnings:

...
[info] == test-compile ==
[info]   Source analysis: 0 new/modified, 0 indirectly invalidated, 0 removed.
[info] Compiling test sources...
[info] Nothing to compile.
[warn] Could not load superclass 'org.scalacheck.Properties' : java.lang.ClassNotFoundException: org.scalacheck.Properties
[warn] Could not load superclass 'org.specs.Specification' : java.lang.ClassNotFoundException: org.specs.Specification
[warn] Could not load superclass 'org.specs.Specification' : java.lang.ClassNotFoundException: org.specs.Specification
[info]   Post-analysis: 3 classes.
[info] == test-compile ==
...

For the moment I'm assuming these are just "noise" (caused by the unified test interface?) and that I can safely ignore them. But it is slightly annoying to some inner OCD part of me (though not so annoying that I'm prepared to add dependencies for the other frameworks).

Is this a correct assumption, or are there subtle errors in my test/config code? If it is safe to ignore, is there any other way to suppress these errors, or do people routinely include all three frameworks so they can pick and choose the best approach for different tests?

TIA, Paul.

(ADDED: scala v2.7.7 and sbt v0.7.4)


Source: (StackOverflow)

In the specs2 framework, why does using a Scope prevent execution of a forAll quantifier?

In the code below, how can I make Specs2 execute the first test? The "print ones" test passes when it should fail. The code inside the forAll() section is not executing because of the new Scope.

The println statements are only for tracing output. Please let me know if you see any lines starting with "one".

The empty Scope is just for demonstrating the problem. This is stripped-down from code where I actually use variables in a Scope.

import org.scalacheck.Gen
import org.scalacheck.Prop._
import org.specs2.ScalaCheck
import org.specs2.mutable.Specification
import org.specs2.specification.Scope

class TestingSpec extends Specification with ScalaCheck {
  "sample test" should {
    "print ones" in new Scope { 
      println("The first test passes, but should fail")
      forAll(Gen.posNum[Int]) { (x: Int) =>
          println("one: " + x)
          (0 < x) mustNotEqual true
      } 
    } 

    "print twos" in { 
      println("The second test passes and prints twos")
      forAll(Gen.posNum[Int]) { (x: Int) =>
        println("two: " + x)
        (0 < x) mustEqual true
      } 
    } 
  }
}

Here is my output:

sbt> testOnly TestingSpec
The second test passes and prints twos
The first test passes, but should fail
two: 1
two: 2
two: 1
    ...
two: 50
two: 34
two: 41
[info] TestingSpec
[info] 
[info] sample test should
[info]   + print ones
[info]   + print twos
[info] 
[info] Total for specification TestingSpec
[info] Finished in 96 ms
[info] 2 examples, 101 expectations, 0 failure, 0 error
[info] 
[info] Passed: Total 2, Failed 0, Errors 0, Passed 2
[success] Total time: 3 s, completed Apr 28, 2015 3:14:15 PM

P.S. I updated my specs2 dependency from version 2.4.15 to 3.5. It still has this issue...


Source: (StackOverflow)