
Removing "case" with duplicate branches from Haskell's Core

I have a piece of Haskell code that looks like this:

fst . f $ (Z :. i `div` 2)

Z and :. are taken from Repa library and are defined like this:

data Z = Z deriving (Show, Read, Eq, Ord)
infixl 3 :. 
data tail :. head = !tail :. !head deriving (Show, Read, Eq, Ord)

The expression to the right of $ defines an array index, while f is a function that takes that index and returns a pair. This compiles to the following Core:

case f_a2pC
       (case ># x_s32E 0 of _ {
          False ->
            case <# x_s32E 0 of _ {
              False -> :. Z (I# (quotInt# x_s32E 2));
              True -> :. Z (I# (-# (quotInt# (+# x_s32E 1) 2) 1))
            };
          True ->
            case <# x_s32E 0 of _ {
              False -> :. Z (I# (quotInt# x_s32E 2));
              True -> :. Z (I# (-# (quotInt# (+# x_s32E 1) 2) 1))
            }
        })
of _ { (x1_a2Cv, _) ->

To me it seems obvious (perhaps incorrectly) that the middle case statement (the one with ># x_s32E 0 as scrutinee) is redundant, as both branches are identical. Is there anything I can do to get rid of it? I compile my code using GHC options recommended in Repa documentation: -O2 -Odph -fno-liberate-case -funfolding-use-threshold1000 -funfolding-keeness-factor1000
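One workaround (an assumption on my part: it applies only when the index i is known to be non-negative, where quot and div agree) is to replace div with quot, which truncates toward zero and lowers to a bare quotInt# without the sign-correcting branches:

```haskell
-- sketch: valid only when i >= 0, where quot and div coincide;
-- quot compiles directly to quotInt#, so no case on the sign is emitted
fst . f $ (Z :. i `quot` 2)
```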

Source: (StackOverflow)

Reload Solr core with curl

I'm trying to reload a Solr core (version 3.6.0) with the following command:

curl http://localhost:8983/solr/admin/cores?action=RELOAD\&core=mycore

When I execute it, I get the following response:

<?xml version="1.0" encoding="UTF-8"?>
   <lst name="responseHeader">
      <int name="status">0</int>
      <int name="QTime">1316</int>
   </lst>

I get a similar response when I enter the URL in my browser (the difference being the value of QTime).

My problem is that if I open the URL in the browser, I can see in the log that the reload is executed; but if I run the curl command, nothing appears in my log (that is to say, no reload process is executed).

Do I have to change some config data? It seems like the call is not reaching the Solr server...
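One thing worth ruling out is shell parsing of the URL: quoting the whole URL keeps '?' and '&' away from the shell (an unquoted '&' backgrounds the command and silently drops core=mycore). A sketch using the host and core name from the question:

```shell
# Quoting keeps the query string as a single argument to curl;
# the actual request is left commented out since it needs a live Solr.
URL="http://localhost:8983/solr/admin/cores?action=RELOAD&core=mycore"
echo "$URL"
# curl "$URL"    # uncomment to send the actual reload request
```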

Source: (StackOverflow)

Setting process core quotas with C++

If you write software where the customer pays for the number of CPU cores the software uses, what would be the best way of achieving this in your C++ code? My research so far has led me to SetProcessAffinityMask on Windows and sched_setaffinity on POSIX systems.

Source: (StackOverflow)

Tomcat SOLR multiple cores setup

I have spent all morning trying to set up multiple cores on a SOLR installation running under an Apache Tomcat server, without success. My solr.xml looks like this:

<solr persistent="false" sharedLib="lib">
  <cores adminPath="/admin/cores">
    <core name="core0" instanceDir="/multicore/core0">
        <property name="dataDir" value="/multicore/core0/data" />
    </core>
    <core name="core1" instanceDir="/multicore/core1">
        <property name="dataDir" value="/multicore/core1/data" />
    </core>
  </cores>
</solr>

What is the correct directory structure? Do I need to change something in solrconfig.xml?
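For reference, a sketch of the on-disk layout implied by the solr.xml above (the conf contents are an assumption; solrconfig.xml and schema.xml must be copied in from Solr's example core):

```shell
# Each instanceDir needs a conf/ directory (solrconfig.xml, schema.xml)
# and gets a data/ directory for its index.
mkdir -p multicore/core0/conf multicore/core0/data
mkdir -p multicore/core1/conf multicore/core1/data
find multicore -type d | sort
```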

Source: (StackOverflow)

Elasticsearch : when to set omit_norms option as false

What is a good use case for the omit_norms option in Elasticsearch? I could not find an adequate explanation on the Elasticsearch website.

Source: (StackOverflow)

Coredump is getting truncated

I am setting

ulimit -c unlimited. 

And in the C++ program we are doing:

struct rlimit corelimit;
if (getrlimit(RLIMIT_CORE, &corelimit) != 0) {
  return -1;
}
corelimit.rlim_cur = RLIM_INFINITY;
corelimit.rlim_max = RLIM_INFINITY;
if (setrlimit(RLIMIT_CORE, &corelimit) != 0) {
  return -1;
}

but whenever the program crashes, the core dump it generates is truncated:

BFD: Warning: /mnt/coredump/core.6685.1325912972 is truncated: expected core file size >= 1136525312, found: 638976.

What could be the issue?

We are using Ubuntu 10.04.3 LTS

Linux ip-<ip> 2.6.32-318-ec2 #38-Ubuntu SMP Thu Sep 1 18:09:30 UTC 2011 x86_64 GNU/Linux

This is my /etc/security/limits.conf

# /etc/security/limits.conf
#Each line describes a limit for a user in the form:
#<domain>        <type>  <item>  <value>
#<domain> can be:
#        - an user name
#        - a group name, with @group syntax
#        - the wildcard *, for default entry
#        - the wildcard %, can be also used with %group syntax,
#                 for maxlogin limit
#        - NOTE: group and wildcard limits are not applied to root.
#          To apply a limit to the root user, <domain> must be
#          the literal username root.
#<type> can have the two values:
#        - "soft" for enforcing the soft limits
#        - "hard" for enforcing hard limits
#<item> can be one of the following:
#        - core - limits the core file size (KB)
#        - data - max data size (KB)
#        - fsize - maximum filesize (KB)
#        - memlock - max locked-in-memory address space (KB)
#        - nofile - max number of open files
#        - rss - max resident set size (KB)
#        - stack - max stack size (KB)
#        - cpu - max CPU time (MIN)
#        - nproc - max number of processes
#        - as - address space limit (KB)
#        - maxlogins - max number of logins for this user
#        - maxsyslogins - max number of logins on the system
#        - priority - the priority to run user process with
#        - locks - max number of file locks the user can hold
#        - sigpending - max number of pending signals
#        - msgqueue - max memory used by POSIX message queues (bytes)
#        - nice - max nice priority allowed to raise to values: [-20, 19]
#        - rtprio - max realtime priority
#        - chroot - change root to directory (Debian-specific)
#<domain>      <type>  <item>         <value>

#*               soft    core            0
#root            hard    core            100000
#*               hard    rss             10000
#@student        hard    nproc           20
#@faculty        soft    nproc           20
#@faculty        hard    nproc           50
#ftp             hard    nproc           0
#    ftp             -       chroot          /ftp
#@student        -       maxlogins       4

#for all users
* hard nofile 16384
* soft nofile 9000

More Details

I am using a gcc optimization flag.

I am setting the thread stack size to 0.5 MB.

Source: (StackOverflow)

core file size limit has non-deterministic effects on processes

I'm running a custom 2.6.27 kernel and I just noticed the core files produced during a segfault are larger than the hard core file size limit set for processes.

And what makes it weirder is that the core file is only sometimes truncated (but not to the limit set by ulimit).

For example, this is the program I will try and crash below:

#include <stdlib.h>
#include <sys/resource.h>

int main(int argc, char **argv)
{
    // Get the hard and soft limit from the command line
    struct rlimit new = {atoi(argv[1]), atoi(argv[1])};

    // Create some memory so as to beef up the core file size
    void *p = malloc(10 * 1024 * 1024);

    if (!p)
        return 1;

    if (setrlimit(RLIMIT_CORE, &new)) // Set the hard and soft limit
        return 2;                     // for core files produced by this
                                      // process

    while (1);

    return 0;
}

And here's the execution:

Linux# ./a.out 1446462 &    ## Set hard and soft limit to ~1.4 MB
[1] 14802
Linux# ./a.out 1446462 &
[2] 14803
Linux# ./a.out 1446462 &
[3] 14804
Linux# ./a.out 1446462 &
[4] 14807

Linux# cat /proc/14802/limits | grep core
Max core file size        1446462              1446462              bytes

Linux# killall -QUIT a.out

Linux# ls -l
total 15708
-rwxr-xr-x 1 root root     4624 Aug  1 18:28 a.out
-rw------- 1 root root 12013568 Aug  1 18:39 core.14802         <=== truncated core
-rw------- 1 root root 12017664 Aug  1 18:39 core.14803
-rw------- 1 root root 12013568 Aug  1 18:39 core.14804         <=== truncated core
-rw------- 1 root root 12017664 Aug  1 18:39 core.14807
[1]   Quit                    (core dumped) ./a.out 1446462
[2]   Quit                    (core dumped) ./a.out 1446462
[3]   Quit                    (core dumped) ./a.out 1446462
[4]   Quit                    (core dumped) ./a.out 1446462

So multiple things happened here. I set the hard limit for each process to be about 1.4 MB.

  1. The core files produced well exceed this set limit. Why?
  2. And 2 of the 4 core files produced are truncated, but by exactly 4096 bytes. What's going on here?

I know the core file contains, among other things, the full stack and heap memory allocated. Shouldn't that be pretty much constant for such a simple program (give or take a few bytes at the most), hence producing a consistent core between multiple instances?


Update 1: the requested output of du:

Linux# du core.*
1428    core.14802
1428    core.14803
1428    core.14804
1428    core.14807

Linux# du -b core.*
12013568    core.14802
12017664    core.14803
12013568    core.14804
12017664    core.14807

Update 2: Adding memset() after malloc() definitely reined things in, in that the core files are now all truncated to 1449984 bytes (still 3522 bytes over the limit).

So why were the cores so big before, what did they contain? Whatever it was, it wasn't subjected to the process' limits.

Update 3: The new program shows some interesting behaviour as well:

Linux# ./a.out 12017664 &
[1] 26586
Linux# ./a.out 12017664 &
[2] 26589
Linux# ./a.out 12017664 &
[3] 26590
Linux# ./a.out 12017663 &        ## 1 byte smaller
[4] 26653
Linux# ./a.out 12017663 &        ## 1 byte smaller
[5] 26666
Linux# ./a.out 12017663 &        ## 1 byte smaller
[6] 26667

Linux# killall -QUIT a.out

Linux# ls -l
total ..
-rwxr-xr-x 1 root root     4742 Aug  1 19:47 a.out
-rw------- 1 root root 12017664 Aug  1 19:47 core.26586
-rw------- 1 root root 12017664 Aug  1 19:47 core.26589
-rw------- 1 root root 12017664 Aug  1 19:47 core.26590
-rw------- 1 root root  1994752 Aug  1 19:47 core.26653           <== ???
-rw------- 1 root root  9875456 Aug  1 19:47 core.26666           <== ???
-rw------- 1 root root  9707520 Aug  1 19:47 core.26667           <== ???
[1]   Quit                    (core dumped) ./a.out 12017664
[2]   Quit                    (core dumped) ./a.out 12017664
[3]   Quit                    (core dumped) ./a.out 12017664
[4]   Quit                    (core dumped) ./a.out 12017663
[5]   Quit                    (core dumped) ./a.out 12017663
[6]   Quit                    (core dumped) ./a.out 12017663

Source: (StackOverflow)

GHC Generating Redundant Core Operations

I have the following program for converting 6-bit ASCII to binary format.

ascii2bin :: Char -> B.ByteString
ascii2bin = B.reverse . fst . B.unfoldrN 6 decomp . to6BitASCII -- replace to6BitASCII with ord if you want to compile this
    where decomp n = case quotRem n 2 of (q,r) -> Just (chr r,q)

bs2bin :: B.ByteString -> B.ByteString
bs2bin = B.concatMap ascii2bin

this produces the following core segment:

Rec {
$wa =
  \ ww ww1 ww2 w ->
    case ww2 of wild {
      __DEFAULT ->
        let {
          wild2 = remInt# ww1 2 } in
        case leWord# (int2Word# wild2) (__word 1114111) of _ {
          False -> (lvl2 wild2) `cast` ...;
          True ->
            case writeWord8OffAddr#
                   ww 0 (narrow8Word# (int2Word# (ord# (chr# wild2)))) w
            of s2 { __DEFAULT ->
            $wa (plusAddr# ww 1) (quotInt# ww1 2) (+# wild 1) s2
            }
        };
      6 -> (# w, (lvl, lvl1, Just (I# ww1)) #)
    }
end Rec }

Notice that ord . chr == id, so there is a redundant operation here: narrow8Word# (int2Word# (ord# (chr# wild2)))

Is there a reason GHC is needlessly converting from Int -> Char -> Int, or is this an example of poor code generation? Can this be optimized out?

EDIT: This is using GHC 7.4.2, I have not tried compiling with any other version. I have since found the problem remains in GHC 7.6.2, but the redundant operations are removed in the current HEAD branch on github.

Source: (StackOverflow)

CORS in .NET Core

I am trying to enable CORS in .NET Core in this way:

    public IConfigurationRoot Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddCors(options => options.AddPolicy("AllowAll", p => p.AllowAnyOrigin()));
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseMvc(routes =>
            routes.MapRoute(
                name: "default",
                template: "{controller=Home}/{action=Index}/{id?}"));
    }


However, when I send a request to my app with Angular 2, I get the famous

"No 'Access-Control-Allow-Origin' header is present on the requested resource."

error message.

I am also using Windows Authentication + WebListener. If I check with Postman, the only response headers are:

Content-Length → 3533
Content-Type → application/json; charset=utf-8
Date → Fri, 14 Oct 2016 12:17:57 GMT
Server → Microsoft-HTTPAPI/2.0

So something must still be misconfigured. Any suggestions?

If I leave the line below commented out, it works, but I need Windows Authentication :-(

        var host = new WebHostBuilder()
            //.UseWebListener(options => options.Listener.AuthenticationManager.AuthenticationSchemes = AuthenticationSchemes.NTLM)

Source: (StackOverflow)

Current Linux Kernel debugging techniques

A Linux machine freezes a few hours after booting and running software (including custom drivers). I'm looking for a method to debug such a problem. There has been significant progress in Linux kernel debugging techniques recently, hasn't there?

I kindly ask to share some experience on the topic.

Source: (StackOverflow)

delete temporary file in java

I'm creating a temporary file in Java, but I'm unable to delete it. This is the code I have written:

temp = File.createTempFile("temp", ".txt");
fileoutput = new FileWriter(temp);
buffout = new BufferedWriter(fileoutput);
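A self-contained sketch (class name hypothetical) that closes the writer before deleting; an open stream keeps a handle on the file, which on Windows in particular makes delete() fail:

```java
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

public class TempFileDemo {
    public static void main(String[] args) throws IOException {
        File temp = File.createTempFile("temp", ".txt");
        BufferedWriter buffout = new BufferedWriter(new FileWriter(temp));
        buffout.write("scratch data");
        buffout.close();                    // release the file handle first
        System.out.println(temp.delete());  // true once nothing holds the file
    }
}
```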

Source: (StackOverflow)

How to create new core in Solr 5?

Currently we are using either Apache Solr 4.10.3 or the Heliosearch Distribution for Solr [HDS] as a search engine to index our data.

Then I heard the news about the Apache Solr 5.0.0 release last month. I successfully installed Apache Solr 5.0.0, and it is now running properly on port 8983 (meaning Solr runs, but I am unable to create a core). In the UI, I cannot find the example core, nor the schema or config files under it. So I started creating a new core the way we did in the old versions, but without success. This is the error I am getting:

Error CREATEing SolrCore 'testcore1': Unable to create core [testcore1] Caused by: Could not find configName for collection testcore1 found:null

Note: I also see a Cloud tab on the left side of the Solr UI (i.e. at http://localhost:8983/solr/) and don't know how it works. That is, I don't know the location of the schema.xml and solrconfig.xml files, due to the lack of an example folder (Collection1), nor how to update those files.

Is there any useful document or solution available to solve this error?
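In Solr 5 the intended workflow is the bin/solr script rather than copying core directories by hand; a sketch, assuming a default standalone (non-Cloud) install and the core name from the question:

```shell
# run from the Solr install directory; 'create' copies a default configset
# and registers the core in one step
bin/solr create -c testcore1
```

For what it's worth, the "Could not find configName" error usually indicates Solr is running in SolrCloud mode, where configs are stored in ZooKeeper rather than on disk.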

Source: (StackOverflow)

How to access the private variables of a class in its subclass?

This is a question I was asked in an interview: I have class A with private members, and class B extends A. I know private members of a class cannot be accessed directly, but the question is: how can I access the private members of class A from class B, rather than creating variables with the same values in class B?
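The usual interview answer is that B cannot read A's private fields directly; A has to expose them deliberately, for example through a protected getter (all names here are hypothetical):

```java
class A {
    private int secret = 42;                      // not visible to subclasses
    protected int getSecret() { return secret; }  // deliberate access point
}

class B extends A {
    int doubled() { return getSecret() * 2; }     // reads A's private state
}

public class AccessDemo {
    public static void main(String[] args) {
        System.out.println(new B().doubled());    // 84
    }
}
```

Reflection (Field.setAccessible(true)) is the other classic answer, but it bypasses encapsulation rather than working with it.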

Source: (StackOverflow)

What is the use of static synchronized method in java? [duplicate]

This question already has an answer here:

I have a question: I have read that a static synchronized method locks on the class object, while a synchronized method locks on the current instance of the object. So what is the meaning of "locked on the class object"?

Can anyone please help me with this topic?
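"Locked on the class object" means the monitor being acquired is the single java.lang.Class instance for that class, shared across all instances; a sketch making the equivalence explicit (class names hypothetical):

```java
class Counter {
    private static int count = 0;

    // acquires the monitor of Counter.class
    static synchronized void increment() { count++; }

    // semantically equivalent explicit form
    static void incrementExplicit() {
        synchronized (Counter.class) { count++; }
    }

    static int get() { return count; }
}

public class LockDemo {
    public static void main(String[] args) {
        Counter.increment();
        Counter.incrementExplicit();
        System.out.println(Counter.get()); // 2
    }
}
```

Because both methods lock the same Class object, no two threads can be inside either of them at the same time, regardless of how many Counter instances exist.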

Source: (StackOverflow)

Treeset to order elements in descending order

Here is the piece of code that I have used with Java 5.0:

TreeSet<Integer> treeSetObj = new TreeSet<Integer>( Collections.reverseOrder() ) ;

Collections.reverseOrder() is used to obtain a comparator in order to reverse the way the elements are stored and iterated.

Is there a more optimized way of doing it?
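The Collections.reverseOrder() approach above is already idiomatic; a quick sketch (values assumed) illustrating the resulting order:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.TreeSet;

public class DescendingDemo {
    public static void main(String[] args) {
        TreeSet<Integer> set = new TreeSet<Integer>(Collections.reverseOrder());
        set.addAll(Arrays.asList(3, 1, 2));
        System.out.println(set);          // [3, 2, 1]
        System.out.println(set.first());  // 3, the largest element comes first
    }
}
```

On Java 6 and later, TreeSet#descendingSet() returns a reversed view of an existing set, which avoids maintaining a second comparator when both orders are needed.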

Source: (StackOverflow)