Commit message | Author | Age | Files | Lines

plain old io
on automatic closing of ByteBuffers. Previously, a straggler could hold
up closing of the ByteBuffers.
Also:
- Improve tracing code
- Parametrize additional options to aid in tuning
Our previous PR was surprising; parallelizing the munmap() calls did not
yield anywhere near the performance gain I expected. The local machine
saw a 10% gain while the test machine showed only 2%. I am still not
clear why, and my two best theories are:
1) Variance due to stragglers (which this change addresses)
2) munmap() is either too fast or too slow relative to the other
instructions compared to our local machine; I don't know which. We'll
have to use adaptive tuning, but that's a different change.
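A minimal sketch of the early-unmap idea described above (not this entry's code; the class and method names are hypothetical, and it relies on the unsupported but long-standing `sun.misc.Unsafe.invokeCleaner`, available since JDK 9):

```java
import java.lang.reflect.Field;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class UnmapEarly {
    // Explicitly unmap a MappedByteBuffer as soon as a worker is done with
    // its shard, instead of letting every munmap() run serially at JVM exit.
    static void unmap(MappedByteBuffer buffer) throws ReflectiveOperationException {
        Field f = sun.misc.Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        sun.misc.Unsafe unsafe = (sun.misc.Unsafe) f.get(null);
        unsafe.invokeCleaner(buffer); // munmap() happens here, on this thread
    }

    // Map a small file, consume it, and release the mapping immediately.
    static int readAndUnmap(Path file) throws Exception {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            int sum = 0;
            for (int i = 0; i < buf.limit(); i++) sum += buf.get(i);
            unmap(buf); // shard finished: unmap now rather than at exit(0)
            return sum;
        }
    }
}
```

With each worker unmapping its own buffer, the munmap() cost overlaps with other shards' work instead of piling up at exit.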
* Use Memory Segment
* Reduce Number of threads
* v1 - Initial prompt
* Introduce Records
* v1 - Initial prompt
* v2 - Introduce Records
* v3 - Improves code
* v4 - Improves JVM parameter
* GitHub Copilot Chat with the help of agoncal
* Format
* Pass measurements-rounding
* Added prepare script
* Version 3
* trying to optimize memory access (-0.2s)
- use smaller segments confined to thread
- unload in parallel
* Only call MemorySegment.address() once (~200ms)
* use Arena and MemorySegment to map entire file at once
* reduced branches and instructions
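Mapping the whole file at once with Arena and MemorySegment looks roughly like this (a sketch, not this entry's code; it uses the FFM API finalized in JDK 22, and the class name is hypothetical):

```java
import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MapWholeFile {
    // Map the entire input file as one MemorySegment. The shared arena lets
    // worker threads read the segment, and closing the arena unmaps the file
    // deterministically instead of waiting for GC.
    static long sumBytes(Path file) throws Exception {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ);
             Arena arena = Arena.ofShared()) {
            MemorySegment seg = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size(), arena);
            long sum = 0;
            for (long i = 0; i < seg.byteSize(); i++) {
                sum += seg.get(ValueLayout.JAVA_BYTE, i);
            }
            return sum;
        } // arena close unmaps here
    }
}
```

Unlike MappedByteBuffer, a MemorySegment is not limited to 2 GB, so the whole measurements file fits in a single mapping.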
* use bits magic
* apply style
* jparera's initial implementation
* Fixes bugs and improves performance for measurements3.txt
* Allows measurements.txt ending without a LF
'pull' merykitty's number parsing code
try out a tonne of flags (found via trial and error on my system)
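merykitty's real parser is a branchless SWAR routine; as a plain illustration of the underlying idea — exploit the fixed `-?\d{1,2}\.\d` format and store tenths of a degree as an `int` — here is a hypothetical simple version:

```java
public class ParseTemp {
    // Parse a measurement like "-12.3" into tenths of a degree (-123) as an
    // int: no String allocation, no floating point. Assumes the 1BRC format
    // guarantee of exactly one fractional digit.
    static int parseTenths(byte[] b, int off, int len) {
        int i = off;
        boolean negative = b[i] == '-';
        if (negative) i++;
        int value = 0;
        for (int end = off + len; i < end; i++) {
            if (b[i] != '.') {                 // skip the decimal point
                value = value * 10 + (b[i] - '0');
            }
        }
        return negative ? -value : value;
    }
}
```

Keeping everything in integer tenths also sidesteps the float-rounding issues several entries ran into.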
based on local testing; no Unsafe; no bitwise tricks yet (#465)
* Squashing a bunch of commits together.
Commit #2: Uplift of 7% using native byte order from ByteBuffer.
Commit #1: Minor changes to formatting.
* Commit #4: Parallelize munmap() and reduce completion time by a
further 10%. As the JVM exits with the exit(0) syscall, the kernel
reclaims the memory mappings via munmap(). Prior to this change, all the
munmap() calls happened right at the end as the JVM exited. This led to
about 350ms of serial execution out of 2500ms, right at the end after
each shard completed its work. We can parallelize it by exposing the
Cleaner from MappedByteBuffer and then ensuring truly parallel execution
of munmap() by using a non-blocking lock (SeqLock). The optimal strategy
for when each thread must call munmap() is an interesting math problem
with an exact solution, and this code roughly reflects it.
Commit #3: Tried reading a long at a time from the ByteBuffer and
checking for the presence of ';'; it was slower than just reading int().
Removed the code for reading longs; retaining only the
hasSemicolonByte(..) check code.
Commit #2: Introduce processLineSlow() and processRangeSlow() for the
tail part.
Commit #1: Create a separate tail piece of work for the last few lines,
processed separately from the main loop. This allows the main loop to
read past its allocated range (by a 'long') if we reserve at least 8
bytes for the tail piece of work.
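The `hasSemicolonByte(..)` check mentioned above is the classic SWAR zero-byte test applied to the semicolon; a sketch of the technique (not necessarily this entry's exact constants or names):

```java
public class Swar {
    private static final long SEMI = 0x3B3B3B3B3B3B3B3BL; // ';' in every byte lane

    // After XOR, a byte equal to ';' becomes 0x00. The expression
    // (x - 0x01..01) & ~x & 0x80..80 then sets the high bit of every zero
    // byte and only of zero bytes, so one branch tests all 8 bytes at once.
    static boolean hasSemicolonByte(long word) {
        long x = word ^ SEMI;
        return ((x - 0x0101010101010101L) & ~x & 0x8080808080808080L) != 0;
    }
}
```

This is why reading 8 bytes as a `long` can beat byte-at-a-time scanning: the separator test amortizes to one pass per word.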
using a state machine to parse the file (#466)
* Golang implementation
* Speed up by avoiding copying the lines
* Memory mapping
* Add script for testing
* Now passing most of the tests
* Refactor to composed method
* Now using integer math throughout
* Now using a state machine for parsing!
* Refactoring state names
* Enabling profiling
* Running in parallel!
* Fully parallel!
* Refactor
* Improve type safety of methods
* The rounding problem is due to a difference between Java's and Go's printf implementations
* Converting my solution to Java
* Merging results
* Splitting the file in several buffers
* Made it parallel!
* Removed test file
* Removed go implementation
* Removed unused files
* Add header to .sh file
---------
Co-authored-by: Matteo Vaccari <mvaccari@thoughtworks.com>
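The state-machine parse described above can be sketched like so (a toy Java version, since the Go implementation was removed; it only accumulates a per-station sum in tenths, and all names are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

public class StateMachineParser {
    enum State { NAME, TEMP }

    // Scan the bytes once, switching state at ';' and '\n', using integer
    // math throughout (temperatures accumulate in tenths of a degree).
    static Map<String, Integer> parse(byte[] data) {
        Map<String, Integer> out = new HashMap<>();
        State state = State.NAME;
        int nameStart = 0, nameEnd = 0, temp = 0;
        boolean neg = false;
        for (int i = 0; i < data.length; i++) {
            byte b = data[i];
            switch (state) {
                case NAME:
                    if (b == ';') {            // end of station name
                        nameEnd = i;
                        state = State.TEMP;
                        temp = 0;
                        neg = false;
                    }
                    break;
                case TEMP:
                    if (b == '\n') {           // end of record: commit it
                        String name = new String(data, nameStart, nameEnd - nameStart);
                        out.merge(name, neg ? -temp : temp, Integer::sum);
                        nameStart = i + 1;
                        state = State.NAME;
                    } else if (b == '-') {
                        neg = true;
                    } else if (b != '.') {     // digit: shift into the value
                        temp = temp * 10 + (b - '0');
                    }
                    break;
            }
        }
        return out;
    }
}
```

The appeal of the state machine is that each byte is touched exactly once and there is no line-copying step.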
* Initial commit trying out multiple things
* Clean up code
* Fix rounding error to fix failing test
Co-authored-by: Giedrius D <d.giedrius@gmail.com>
also a bunch of smaller improvements
Co-authored-by: Karthikeyans <karthikeyan.sn@zohocorp.com>
* Modify baseline version to improve performance
- Consume and process stream in parallel with memory map buffers, parsing it directly
- Use int instead of float/double to store values
- Use Epsilon GC and graal
* Update src/main/java/dev/morling/onebrc/CalculateAverage_adriacabeza.java
* Update calculate_average_adriacabeza.sh
---------
Co-authored-by: Gunnar Morling <gunnar.morling@googlemail.com>
* - Read file in multiple threads if available: 17" -> 15" locally
- Changed String to BytesText with cache: 12" locally
* - Fixed bug
- BytesText to Text
- More checks when reading the file
* - Combining measurements should be thread safe
- More readability changes
Co-authored-by: Keshavram Kuduwa <keshavram.kuduwa@apptware.com>
fix masking
fix masking
* create calculate average frd
* rename to match github username
* add licence header
* make script executable
---------
Co-authored-by: Farid Mammadov <farid.mammadov@simbrella.com>
* Improve scheduling for another 6%.
* Tune hash function and collision handling.
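Hash tuning in entries like this one usually means a cheap multiplicative mixer whose high bits index the table; one generic shape (the constant and shift below are illustrative choices, not this entry's):

```java
public class Mixer {
    // Multiply by a large odd constant (the 64-bit golden ratio) and keep
    // the top bits, which a multiply mixes best. Returns an index in
    // [0, 2^tableBits), suitable for a power-of-two hash table.
    static int mix(long hash, int tableBits) {
        long h = hash * 0x9E3779B97F4A7C15L;
        return (int) (h >>> (64 - tableBits));
    }
}
```

Fewer collisions means fewer equality checks on station names, which is where the cycles go in this workload.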
* Initial version
* Small result merge optimisation
* Switched from reading bytes to longs
* Reading into internal buffer, test fixes
* Licence and minor string creation optimisation
* Hash collision fix
* rethink chunking
* fix typo
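Rethinking chunking for this workload generally means aligning chunk boundaries to newlines so no worker ever sees a split line; a hypothetical helper sketching that:

```java
public class Chunker {
    // Split a buffer into n chunks whose boundaries land just after a '\n'.
    // Returns n+1 offsets: chunk c spans [offsets[c], offsets[c+1]).
    // Assumes data.length >= n and that the input ends with a newline.
    static int[] chunkOffsets(byte[] data, int n) {
        int[] offsets = new int[n + 1];
        offsets[n] = data.length;
        for (int c = 1; c < n; c++) {
            int pos = (int) ((long) data.length * c / n); // rough even split
            while (pos < data.length && data[pos - 1] != '\n') {
                pos++; // advance past the line straddling the boundary
            }
            offsets[c] = pos;
        }
        return offsets;
    }
}
```

Each worker then parses its own range independently, and results are merged at the end.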
Commit #2: Uplift of 7% using native byte order from ByteBuffer.
Commit #1: Minor changes to formatting.
Co-authored-by: vemana <vemana.github@gmail.com>
* Initial commit with custom implementation, 2:40
* Initial file-channel based version, 1:27
* Individual maps for executors, 0:54
* Use better-suited map: 0:34
* Verified correct, skip CharBuffer, :37
* Minor improvements and cleanup, 0:24
* String to byte[], 0:21
* Additional cleanup, use GraalVM, 0:17
* Faster number handling, 0:11
* Faster buffer reading, 0:08
* Prepare for environment with variable RAM and CPU, 0:08
* Fix bug causing issues with certain buffer sizes
* Larger overhead to not miss long station names that overlap buffers
* Reorder scripts and fix one-off bug
* initial version
let's exploit that superscalar beauty!
* give credit where credit is due
also: added ideas I don't want to forget
* use all CPUs
* use graal
* optimized with less constructor arg
* optimized with low collision mixer
* Remove commented-out params from the script
* General cleanup and refactoring
* Deoptimize parseTemperatureSimple
* Optimize nameEquals()
- custom hashmap
- avoid string creation
- use graal
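The "custom hashmap + avoid string creation" combination means keying the table on raw bytes so the hot loop never allocates a String; a minimal open-addressing sketch (hypothetical names, no resizing):

```java
import java.util.Arrays;

public class ByteKeyMap {
    // Open-addressing map keyed by raw bytes: power-of-two capacity,
    // linear probing, Strings only ever created when printing results.
    private final byte[][] keys;
    private final int[] counts;
    private final int mask;

    ByteKeyMap(int capacityPow2) {
        keys = new byte[capacityPow2][];
        counts = new int[capacityPow2];
        mask = capacityPow2 - 1;
    }

    void increment(byte[] key, int len) {
        int h = 1;
        for (int i = 0; i < len; i++) h = h * 31 + key[i];
        int idx = h & mask;
        while (true) {
            byte[] existing = keys[idx];
            if (existing == null) {                   // empty slot: first sighting
                keys[idx] = Arrays.copyOf(key, len);  // copy the bytes only once
                counts[idx] = 1;
                return;
            }
            if (existing.length == len
                    && Arrays.equals(existing, 0, len, key, 0, len)) {
                counts[idx]++;                        // same station: bump in place
                return;
            }
            idx = (idx + 1) & mask;                   // collision: linear probe
        }
    }

    int count(byte[] key) {
        int h = 1;
        for (byte b : key) h = h * 31 + b;
        int idx = h & mask;
        while (keys[idx] != null) {
            if (keys[idx].length == key.length && Arrays.equals(keys[idx], key)) {
                return counts[idx];
            }
            idx = (idx + 1) & mask;
        }
        return 0;
    }
}
```

A real entry would store min/max/sum/count per slot instead of a single counter, but the probing scheme is the same.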
for city names are not allocated with each row. (#323)
Co-authored-by: Bruno Felix <bruno.felix@klarna.com>
Implementation that uses the Vector API for the following
- scan for separators
- calculate hash
- n-way lookup in hash table
- parse digits
fix queue size
* Submission #1
* Submission #1 (Fixed casing of file names)
* Submission #1 (Added executable to Git permissions)
* Submission 1 (Fixed incorrect map size)
* Submission 1 (Fixed output problems on Windows)