Commit message | Author | Age | Files | Lines
* final commit
changing to using MappedByteBuffer
changes before using Unsafe addresses
using Unsafe
* using GraalVM; correct Unsafe memory implementation
---------
Co-authored-by: Karthikeyans <karthikeyan.sn@zohocorp.com>
* Inline parsing of name and station to avoid constantly updating the offset field (-100ms)
* Remove Worker class, inline the logic into lambda
* Accumulate results in an int matrix instead of using result row (-50ms)
* Use native image
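The "int matrix" idea above can be sketched as follows. This is a hypothetical illustration (class and field names are mine, not the author's): per-station min/max/sum/count live in a primitive `int[][]` instead of one accumulator object per result row, and temperatures are kept as integer tenths since the 1BRC format has exactly one decimal digit.

```java
// Hypothetical sketch: per-station stats in a primitive int matrix,
// avoiding one accumulator object per station. Temperatures are stored
// as tenths of a degree (the 1BRC format has one decimal digit).
public class IntMatrixStats {
    // columns: 0 = min, 1 = max, 2 = sum, 3 = count (all in tenths)
    private final int[][] stats;

    public IntMatrixStats(int capacity) {
        stats = new int[capacity][4];
        for (int[] row : stats) {
            row[0] = Integer.MAX_VALUE; // min starts high
            row[1] = Integer.MIN_VALUE; // max starts low
        }
    }

    public void add(int stationId, int tenths) {
        int[] row = stats[stationId];
        if (tenths < row[0]) row[0] = tenths;
        if (tenths > row[1]) row[1] = tenths;
        row[2] += tenths;
        row[3]++;
    }

    public int[] row(int stationId) {
        return stats[stationId];
    }
}
```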
* Uses the Vector API for city-name parsing and for hash-index collision resolution
* Uses lookup tables for temperature parsing
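For context on the temperature-parsing step: the 1BRC value format is always `-?d?d.d`, so it can be parsed directly into integer tenths. The sketch below is the straightforward branchy version of that task, not the author's lookup-table variant (which trades these branches for table reads); names are mine.

```java
// Sketch of the parsing task (not the lookup-table version from the
// commit): the 1BRC temperature format is -?d?d.d, so it parses
// directly into tenths of a degree with no floating point.
public class TempParse {
    public static int parseTenths(byte[] b, int off) {
        int i = off, sign = 1;
        if (b[i] == '-') { sign = -1; i++; }
        int v = b[i++] - '0';              // first integer digit
        if (b[i] != '.') {                 // optional second integer digit
            v = v * 10 + (b[i++] - '0');
        }
        i++;                               // skip '.'
        v = v * 10 + (b[i] - '0');         // fractional digit
        return sign * v;
    }
}
```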
- inline computeIfAbsent
- replace arraycopy with copyOfRange
Co-authored-by: Yann Moisan <yann@zen.ly>
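The "inline computeIfAbsent" change above is a common hot-path trick: open-coding the get/put pair avoids allocating a capturing lambda at the call site and gives the JIT a simpler shape to inline. A minimal sketch, with hypothetical names:

```java
import java.util.HashMap;

// Hypothetical sketch: open-coded get/put on the hot path instead of
// map.computeIfAbsent(key, k -> new int[4]), which would allocate a
// lambda object per call site and add an indirection.
public class InlineCia {
    public static int[] statsFor(HashMap<String, int[]> map, String key) {
        int[] s = map.get(key);
        if (s == null) {          // miss: create and insert once
            s = new int[4];
            map.put(key, s);
        }
        return s;
    }
}
```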
* Deploy v2 for parkertimmins
Main changes:
- fix hash which masked incorrectly
- do station equality check in simd
- make station array length multiple of 32
- search for newline rather than semicolon
* Fix bug - entries were being skipped between batches
At the boundary between two batches, the first batch would stop after
crossing a limit with a padding of 200 characters applied. The next
batch should then start looking for the first full entry after the
padding. This padding logic had been removed when starting a batch. For
this reason, entries starting in the 200 character padding between
batches were skipped.
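The boundary rule described above can be sketched generically (names mine, not the author's code): every batch except the one owning the file start skips forward to the first byte after a newline, while the previous batch reads past its nominal end to finish the entry it started, so no entry is skipped or counted twice.

```java
// Hypothetical sketch of batch-boundary alignment: a batch that does
// not own the start of the file begins at the first byte after a '\n';
// the previous batch overruns its nominal end to finish its last entry.
public class ChunkAlign {
    public static int alignedStart(byte[] data, int nominalStart) {
        if (nominalStart == 0) return 0;   // first batch owns the file start
        int i = nominalStart;
        while (i < data.length && data[i - 1] != '\n') i++;
        return i;
    }
}
```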
parse value before going to map
* First optimal attempt
* Removing debug lines
* Using default string equals method
---------
Co-authored-by: Gaurav Deshmukh <deshmgau@amazon.com>
* 1brc challenge
* fixed a rounding error
* added the file back
* fixed file
* removed a file
---------
Co-authored-by: Jeevjyot Singh Chhabda <jeevjyotsinghchhabda@Jeevjyots-MBP.hsd1.ca.comcast.net>
* fast path for keys < 16 bytes
* fix off-by-one error
the mask is wrong for the 2nd word when len == 16
* fewer chunks per thread
seems like compact code wins, on my test box anyway.
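The off-by-one above is a classic Java pitfall: shift counts are taken mod 64, so a mask built by shifting silently breaks for a full 8-byte word, which is exactly the second word of a 16-byte key (16 - 8 = 8 bytes). A minimal illustration with my own naming:

```java
// Mask keeping the low n bytes (0 <= n <= 8) of a word. The tempting
// (1L << (8 * n)) - 1 is wrong for n == 8: Java shift counts are taken
// mod 64, so 1L << 64 == 1L and the mask collapses to 0 instead of -1.
// The full-word case must be special-cased.
public class KeyMask {
    public static long lowBytesMask(int n) {
        return n >= 8 ? -1L : (1L << (8 * n)) - 1;
    }
}
```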
Co-authored-by: Ian Preston <ianopolous@protonmail.com>
* tonivade implementation
* synchronized block performs better than ReentrantLock
* remove ConcurrentHashMap
* refactor
* use HashMap.newHashMap
* change double to int
* minor refactor
* fix
position processing branches. This provides a small but noticeable speed-up. It also expands and obfuscates the code, unfortunately. (#563)
* Some clean up, small-scale tuning, and reduce complexity when handling longer names.
* Do actual work in worker subprocess. Main process returns immediately
and OS clean up of the mmap continues in the subprocess.
* Update minor Graal version after CPU release.
* Turn GC back to epsilon GC (although it does not seem to make a
difference).
* Minor tuning for another +1%.
Ensure 8-byte alignment in key buffer for faster comparisons. (#523)
* Reduce allocations
* Shrink the heap size
* Calculate hash when reading name (50-100ms difference)
* no need to reverse bytes
* bump heap size
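The "calculate hash when reading name" item above refers to folding the hash into the same loop that scans for the `;` separator, instead of making a second pass over the name bytes. A hypothetical sketch (names and the 31x multiplier are mine):

```java
// Hypothetical sketch: compute the name's hash in the same loop that
// scans for ';', so the bytes are only walked once.
public class ScanHash {
    // Returns the index of ';' and leaves the hash in hashOut[0].
    public static int scan(byte[] line, int off, int[] hashOut) {
        int h = 1;
        int i = off;
        while (line[i] != ';') {
            h = 31 * h + line[i];  // hash folded into the scanning loop
            i++;
        }
        hashOut[0] = h;
        return i;
    }
}
```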
- Avoids creating unnecessary String objects and handles station names via their djb2 hashes instead
- Initializes hashmaps with capacity and load factor
- Adds -XX:+AlwaysPreTouch
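For reference, djb2 is Bernstein's string hash: start at 5381, then `h = h * 33 + byte` for each input byte. A minimal version:

```java
// djb2 hash: seed 5381, then h = h * 33 + b per byte.
// (h << 5) + h is multiply-by-33 written without a mul instruction.
public class Djb2 {
    public static int hash(byte[] name) {
        int h = 5381;
        for (byte b : name) {
            h = ((h << 5) + h) + b;
        }
        return h;
    }
}
```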
Co-authored-by: Giovanni Cuccu <gcuccu@imolainformatica.it>
* 0xshivamagarwal implementation
* .
---------
Co-authored-by: Shivam Agarwal <>
* files created by create_fork.sh
* use indexOf
* improved implementation based on rafaelmerino
---------
Co-authored-by: Yann Moisan <yann@zen.ly>
* Version 3
* Use SWAR algorithm from netty for finding a symbol in a string
* Faster equals - store the remainder in a long field (- 0.5s)
* optimise parsing numbers - prep
* Keep tweaking parsing logic
* Rewrote number parsing
may be a tiny bit faster, if at all
* Epsilon GC
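The Netty SWAR trick referenced above checks all 8 bytes of a `long` for a target byte at once: XOR zeroes out bytes equal to `;`, then the zero-byte detector `(x - 0x01…) & ~x & 0x80…` sets the high bit of exactly those bytes. A sketch assuming little-endian packing (byte 0 of the input lives in the low 8 bits; names are mine):

```java
// SWAR (SIMD within a register) byte search, as popularized by Netty:
// XOR turns bytes equal to ';' into 0x00, and the classic
// (x - 0x01..) & ~x & 0x80.. expression flags exactly the zero bytes.
public class SwarSemicolon {
    private static final long SEMIS = 0x3B3B3B3B3B3B3B3BL; // ';' in every byte

    // Index (0-7) of the first ';' in the word, or 8 if absent.
    // Assumes little-endian packing: byte 0 in the low 8 bits.
    public static int firstSemicolon(long word) {
        long x = word ^ SEMIS;
        long match = (x - 0x0101010101010101L) & ~x & 0x8080808080808080L;
        return Long.numberOfTrailingZeros(match) >>> 3; // 64 tz -> 8 (not found)
    }
}
```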
Co-authored-by: Ian Preston <ianopolous@protonmail.com>
* refactoring
* segregated heap for names
also a different hashing function. turns out hashing just the first word is good enough
solution (#499)
* 1brc challenge, but one that will run using JDK 8 without Unsafe and still do reasonably well.
* Better hashtable
* the fastest GC is no GC
* cleanups
* increased hash size
* removed Playground.java
* collision-handling allocation free hashmap
* formatting
Started running perf, perhaps this helps. No idea how to use it yet
plain old io
on automatic closing of ByteBuffers. Previously, a straggler could hold
up closing the ByteBuffers.
Also:
- Improve Tracing code
- Parametrize additional options to aid in tuning
Our previous PR was surprising; parallelizing munmap() call did not
yield anywhere near the performance gain I expected. Local machine had
10% gain while testing machine only showed 2% gain. I am still not clear
why it happened and the two best theories I have are
1) Variance due to stragglers (that this change addresses)
2) munmap() is either too fast or too slow relative to the other
instructions compared to our local machine. I don't know which. We'll
have to use adaptive tuning, but that's in a different change.
* Use MemorySegment
* Reduce number of threads
* v1 - Initial prompt
* Introduce Records
* v1 - Initial prompt
* v2 - Introduce Records
* v3 - Improves code
* v4 - Improves JVM parameter
* GitHub Copilot Chat with the help of agoncal
* Format
* Pass measurements-rounding
* Added prepare script
|
* Version 3
* trying to optimize memory access (-0.2s)
- use smaller segments confined to thread
- unload in parallel
* Only call MemorySegment.address() once (~200ms)
|
* use Arena and MemorySegment to map entire file at once
* reduced branches and instructions
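Mapping the entire file at once with `Arena` and `MemorySegment` (Java 22+ FFM API) sidesteps the 2 GB `MappedByteBuffer` limit and ties the mapping's lifetime to the arena. A minimal sketch (class and method names are mine):

```java
import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch (Java 22+ FFM API): map the whole input file into a single
// MemorySegment. The mapping stays valid until the Arena is closed,
// with no 2 GB MappedByteBuffer limit and no per-buffer bookkeeping.
public class MapWholeFile {
    public static MemorySegment map(Path file, Arena arena) throws Exception {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            return ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size(), arena);
        }
    }
}
```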
* use bits magic
* apply style
* jparera's initial implementation
* Fixes bugs and improves performance for measurements3.txt
* Allows measurements.txt ending without an LF
'pull' merykitty's number parsing code
try out a tonne of flags (found via trial and error on my system)
based on local testing; no Unsafe; no bitwise tricks yet (#465)
* Squashing a bunch of commits together.
Commit #2: Uplift of 7% using native byte order from ByteBuffer.
Commit #1: Minor changes to formatting.
* Commit #4: Parallelize munmap() and reduce completion time further by
10%. As the JVM exits with the exit(0) syscall, the kernel reclaims the
memory mappings via munmap(). Prior to this change, all the munmap()
calls were happening right at the end as the JVM exited. This led to
serial execution of about 350 ms out of 2500 ms right at the end after
each shard completed its work. We can parallelize it by exposing the
Cleaner from MappedByteBuffer and then ensuring that it is truly parallel
execution of munmap() by using a non-blocking lock (SeqLock). The
optimal strategy for when each thread must call munmap() is an
interesting math problem with an exact solution, and this code roughly
reflects it.
Commit #3: Tried out reading a long at a time from the ByteBuffer and
checking for the presence of ';'; it was slower compared to just reading
an int. Removed the code for reading longs, retaining just the
hasSemicolonByte(..) check code.
Commit #2: Introduce processLineSlow() and processRangeSlow() for the
tail part.
Commit #1: Create a separate tail piece of work for the last few lines
to be processed separately from the main loop. This allows the main loop
to read past its allocated range (by a 'long') if we reserve at least 8
bytes for the tail piece of work.
using a state machine to parse the file (#466)
* Golang implementation
* Speed up by avoiding copying the lines
* Memory mapping
* Add script for testing
* Now passing most of the tests
* Refactor to composed method
* Now using integer math throughout
* Now using a state machine for parsing!
* Refactoring state names
* Enabling profiling
* Running in parallel!
* Fully parallel!
* Refactor
* Improve type safety of methods
* The rounding problem is due to a difference between Java's and Go's printf implementations
* Converting my solution to Java
* Merging results
* Splitting the file in several buffers
* Made it parallel!
* Removed test file
* Removed go implementation
* Removed unused files
* Add header to .sh file
---------
Co-authored-by: Matteo Vaccari <mvaccari@thoughtworks.com>
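The printf rounding difference mentioned above comes down to tie-breaking: Java's `Math.round` rounds halves toward positive infinity, while Go's `fmt` formatting rounds halves to even, so values like 12.25 diverge at one decimal place. A minimal sketch of the half-up flavor (names mine):

```java
// Sketch of one-decimal rounding via Math.round, which rounds ties
// toward positive infinity (so 12.25 -> 12.3, -12.25 -> -12.2).
// Go's half-to-even formatting would give 12.2 for 12.25 instead.
public class Round1 {
    public static double roundHalfUp(double value) {
        return Math.round(value * 10.0) / 10.0;
    }
}
```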