coldcode 14 hours ago

It's so interesting to see how much of a commodity charting/graphing has become. When we started building Deltagraph in late 1988, what we made became a kind of standard, since we targeted Postscript and Illustrator output and included almost every kind of chart we could find, with ridiculous options for everything, so people used it worldwide, especially if targeting print. In the mid-90's it was sold by the publisher (we just did the dev), and it spent the next 25 years at various owners before dying during the pandemic, all still based on the original source code (C) I started. I can't imagine how bad the code looked by then...

  • dcreater 13 hours ago

    And yet it's still not sufficiently commoditized and widespread. The majority of the workforce is using proprietary solutions that are out of date: Tableau, JMP in HW engineering, SAS, and Excel.

SubiculumCode 14 hours ago

Sure, ggplot, for example, is finicky, and you need to fuss over it to get the look you want, but then again, it is very flexible. Most of these solutions get frustrating as soon as you want to do, for example, spaghetti plots of within-subject repeated measures using age (not time-point) from an accelerated longitudinal design, with fixed-effect plots on top, e.g. this plot of mine [1]. [1] https://imgur.com/a/gw2vV7w
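
For concreteness, here is a rough sketch of that kind of figure (Python/matplotlib purely as an illustration; the column names are made up, and the heavy curve is a plain quadratic fit standing in for the model-based fixed effect):

    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv("visits.csv")  # hypothetical long format: subject, age, y

    fig, ax = plt.subplots()
    for _, subj in df.groupby("subject"):          # one thin line per subject
        ax.plot(subj["age"], subj["y"], color="grey", alpha=0.4, linewidth=0.8)

    coefs = np.polyfit(df["age"], df["y"], deg=2)  # stand-in for the fixed effect
    grid = np.linspace(df["age"].min(), df["age"].max(), 200)
    ax.plot(grid, np.polyval(coefs, grid), color="black", linewidth=2)

    ax.set(xlabel="age", ylabel="outcome")
    plt.show()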

  • nxobject 11 hours ago

    I just needed to stop and say: as a biostatistician, boy do I love a beautiful, complex longitudinal design. I remember my old professor asking us how, at that point, we would decompose effects into cross-sectional and longitudinal components, Lord's paradox, etc... and I still don't understand Lord's paradox as well as I should.

    • SubiculumCode 9 hours ago

      This is a very important idea. For example, one issue with accelerated longitudinal designs, see image [1], is that while they efficiently cover a larger age range, the fixed effects of age are largely driven by cross-sectional differences between subjects sampled at younger versus older ages. One method that can be used to test whether the pattern seen in the fixed effects represents the pattern within subjects is to decompose the within- and between-subject effects of age. For example, you can create a non-time-varying variable like age at first visit (starting_age), and then a within-subject variable for change in age since the first visit, which would be zero at the first visit (age1-age1=0, age2-age1 for the change in age between visit 2 and visit 1, age3-age1 for the change in age between the 3rd visit and the first visit), calling it dage. Then, in the mixed model, test for the starting_age:dage interaction. If you have an interaction, then you know that the within-subject effect of change in age differs depending on how old you were when you started. I got this from Lesa Hoffman's freely available lectures [2], particularly [3][4], and I now discovered she recently published [5], which I should read. A minimal model sketch follows the references below.

      [1] https://e-m-mccormick.github.io/static/longitudinal-primer/l... [2] https://www.lesahoffman.com/ [3] https://www.lesahoffman.com/PSYC944/944_Lecture11_Alt_Time.p... [4] https://www.lesahoffman.com/Workshops/SMiP_Presentation_June... [5] https://www.tandfonline.com/doi/full/10.1080/00273171.2025.2...
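
      As a sketch only (Python/statsmodels here, with hypothetical column names subject, age, y; the actual models in [3][4] are richer than this):

          import pandas as pd
          import statsmodels.formula.api as smf

          df = pd.read_csv("visits.csv")  # long format: one row per subject-visit

          # between-subject part: age at first visit (min works since age only grows)
          df["starting_age"] = df.groupby("subject")["age"].transform("min")
          # within-subject part: change in age since the first visit (0 at visit 1)
          df["dage"] = df["age"] - df["starting_age"]

          # random intercept and slope on dage per subject;
          # starting_age * dage gives both main effects plus the interaction of interest
          m = smf.mixedlm("y ~ starting_age * dage", df,
                          groups=df["subject"], re_formula="~dage")
          print(m.fit().summary())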

    • SubiculumCode 8 hours ago

      And thank you for the reminder about Lord's paradox. I should refresh my memory.

tau255 10 hours ago

I used SciDavis a lot and before that tried QtiPlot. When I had the chance, I used Origin. SciDavis was clunky and had some issues (it liked to crash), but it worked well enough for what I wanted. I had some problems with setting plot styles; maybe it was just me, but it wasn't obvious how to copy a style between plots.

I tried LabPlot recently and had issues with CSV import of datetime data: it did not really recognise the date-and-time series format, even after using the advanced import options and setting it myself manually. I tried to find some solutions, but the LabPlot manual website is just a bunch of YouTube videos [1]. That is really not helpful; I am not browsing a manual to be forced to watch clips of what I already tried. The developers really need to think about making a traditional manual.

There is also AlphaPlot, a more or less alive fork of SciDavis. It has its own issues, and it shows the same problem with yyyy-MM-dd hh:mm:ss.zzz dates. Other than that it is a useful bit of kit.
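
One possible workaround (a sketch only, untested against LabPlot/AlphaPlot specifically; the file and column names are made up) is to pre-parse the timestamps in pandas and write out a plain numeric time column that any importer will accept:

    import pandas as pd

    df = pd.read_csv("data.csv")  # hypothetical file with a "timestamp" column
    # parse the yyyy-MM-dd hh:mm:ss.zzz strings explicitly
    df["timestamp"] = pd.to_datetime(df["timestamp"], format="%Y-%m-%d %H:%M:%S.%f")
    # seconds since the first sample: a plain float column imports everywhere
    df["t_seconds"] = (df["timestamp"] - df["timestamp"].iloc[0]).dt.total_seconds()
    df.to_csv("data_clean.csv", index=False)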

But when I want to do some batch processing and generate multiple plots, automated and reproducible, I go with gnuplot. The learning curve is steep, but after writing gnuplot scripts a few times you just have a personal template and know the relevant parts. It is really good.

All in all, I am glad there is an open-source movement in this area. It is always better to have more options.

1. https://docs.labplot.org/en/2D_plotting/2D_plotting_xycurve....

anigbrowl 4 hours ago

Looks cool, but I wish there was a section explaining 'here's why it's better than matplotlib or [other popular charting tools]'. I looked through the feature list but I didn't feel like mentally constructing a comparison matrix. I see lots of things to like about it, but I would really appreciate case studies or something to explain why I might want to invest time in learning this new thing.

jtrueb 17 hours ago

Obviously there is a lot of work here, but I am a bit confused. If you already have lab code in Julia, Matlab, R, Python, Excel, etc., what is the motivation to use this tool? Is this hot in a specific community?

  • jabl 17 hours ago

    I suppose this is a FOSS solution for roughly the same space occupied by commercial tools like Origin, which are very popular in some scientific communities.

    They can be useful if you have other tools (e.g. measurement software) that already produce the data you want, and you just want a GUI tool to create plots and maybe do some simple things like least-squares curve fitting.

    If you already do a lot of data wrangling in something with a programming language and plotting libraries accessible from said language, like the ones you mention, yeah, this is not the tool for you.

    • ajot 16 hours ago

      It is! I remember using this (or SciDavis, a related project) a couple of years back in college. It was not as powerful as Origin 10 years ago, but it ran on Linux.

      This is great for people who don't know how to program and don't want to learn.

      • pvitz 13 hours ago

        Same experience here! We used Origin and/or QtiPlot in a physics lab for the graphs and quick regressions.

  • tonyarkles 15 hours ago

    I'm in potentially the target demographic for this. I regularly bounce between R, Python, Maxima, and occasionally MATLAB/Octave. Passing data between these is usually done using the lowest common denominator: CSV. Having four completely different interfaces to these tools is a hassle. I'm also not a big fan of Jupyter and if this feels better for me it might be a decent Jupyter replacement even without the cross-language stuff.

    • MostlyStable 13 hours ago

      I'm someone who enjoys figuring out the details of making a nice-looking plot (in base R; I can't stand ggplot), but even so, LLMs are pretty much good enough that if I explain how I want the plot to look and how my data is structured, they can generate code that works on the first shot. It seems to me that, at this point, if you are already doing some coding in one of the above languages but either don't like or aren't comfortable making the plots with them, LLMs can solve that for you. Unless they are significantly worse in the non-R options (which could be the case; it wouldn't surprise me if R has more plotting examples in the training set than the other languages).

      • REDS1736 10 hours ago

        Sorry for the off-topic question, but would you mind elaborating on why you can't stand ggplot? I personally haven't spent much time with the base R functions, but I have come to absolutely adore ggplot for graphing, so I am very interested in learning about potential reasons to use base R plotting functions instead!

        • MostlyStable 9 hours ago

          I think it's just too different from base R, and I had spent too long in base R before tidyverse/ggplot became a thing. By the time it came around, I was already good enough to do basically all my plotting without it, and having to learn an entirely new set of syntax just annoyed me.

          My reaction is much more emotional than rational.

          • tonyarkles 6 hours ago

            Thanks for elaborating on that, I was wondering too. In a very similar way (with a different outcome), my first real introduction to R was due to the "A Layered Grammar of Graphics" paper while doing some tangential research during grad school. I fell in love with the abstractions in the paper and reluctantly learned R so that I could get access to ggplot :).

            As a side note, my research ended up essentially discovering Flame Graphs before Brendan Gregg was publishing/popularizing his version. His are much better than mine, but I take some comfort knowing that the ideas I was coming up with in grad school were decent!

  • analog31 7 hours ago

    In my experience, there are people out there who don't program, or who don't feel that it's a productive way of doing things. I'm firmly in the Python camp, but recognize that my workplace has several JMP licenses, and the majority of engineers are satisfied with Excel. And I never let anybody see how long it takes me to do things. ;-)

    However, those people also belong to the majority of the world that is still leery of "open source" or anything that doesn't come from a known brand.

    This thing could be an option for someone who wants to mess around with data but isn't comfortable mentioning it to the boss until they see for themselves if it's worthwhile.

  • wodenokoto 11 hours ago

    Haven't tried this tool yet, but if it lets me drag and drop my data and visuals, that sounds like a great addition to those tools.

  • goku12 13 hours ago

    It's the use case. Here is one concrete example. I worked as a project engineer during the development of a launch vehicle. The telemetry data frames from every test and every flight were processed into numerous CSV or TSV files labeled with the parameter name. Those files could be very large depending on their sampling rates, especially for tests that lasted hours on end. You would conduct exploratory manual analysis on that data, which involves:

    * Quickly cycle visually through time-series graphs (often several hundred parameters). You'd have seen most of those parameters before and would quickly catch any anomalies. You can clear a lot of data rapidly this way.

    * Quickly analyze a graph at various zoom and pan settings. Maybe save some as images for inclusion in documents. Like above, the zoom and pan operations often follow each other within a matter of seconds.

    * Zoom into fine details, down to single-bit levels or single sample intervals. There's a surprising amount of information you can glean even at these levels. I have run into freak but useful single events at these levels, and since they're freak events, it's hard to predict in advance where they'd show up. So operation speed becomes a key factor again.

    * Plot multiple parameters (sometimes with different units) together to assess their correlation or unusual events. We used to even have team analysis sessions where such visualizations were prepared on demand.

    * Do statistical or spectral analysis (like periodograms, log or semi-log graphs, PDFs, etc)

    * Add markers or notes within the graph (usually to describe events). Change the axes or plot labels. Change grid value formatting (e.g. do you want time in seconds or HMS?).

    All the operations above are possible with Julia, Matlab, R or Python, and we did use almost all of them (depending on personal preference). But none of them suit the workflow described above for one simple reason: speed. You don't have enough time to select each parameter by text or GUI. There must be a way to either quickly launch a visualization or cycle through the parameters as the investigator closes each graph. You also don't have time to set zoom, pan and labels by text; it must be done using the mouse (zoom & pan) and directly on the graph (labels and markers) in a WYSIWYG manner. And you don't want to run an FFT or a filter function, save the new series and then plot it - you want it done with a single menu selection. The difference is like using a C++ compiler vs Python in JupyterLab. The application we used was very similar to LabPlot.
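
    For comparison, the quickest scripted equivalent of the first bullet is roughly the loop below (the file layout is made up), and even that only covers the cycling part - every marker, label change or FFT still means another round trip through code:

        from pathlib import Path
        import pandas as pd
        import matplotlib.pyplot as plt

        # hypothetical layout: one CSV per parameter, first column time, second column value
        for f in sorted(Path("telemetry").glob("*.csv")):
            df = pd.read_csv(f)
            df.plot(x=df.columns[0], y=df.columns[1], title=f.stem, legend=False)
            plt.show()  # closing the window advances to the next parameter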

    Now, Excel might seem like a better choice. In fact, LabPlot and our application both have a spreadsheet-like interface with the ability to directly import CSV, TSV, etc. But Excel just doesn't cross the finish line for our requirement. For example, to plot a time series in Excel, you have to select the values (column or cells), designate the axes, optionally define the axes and graph labels, start a plot, expand it to the required level and format it for printing. At that rate, you wouldn't finish the analysis in a month. Those applications would do all that on their own (the labels and other metadata were embedded in the data files by means of formatted comments). But an even bigger problem was the size of the data. Some of those files would, on import, slow Excel down to the speed of molasses. The application had disk- and memory-level buffering that kept responsiveness at almost instant interactivity.

    I hope this gives you an idea of where the tools you mentioned are not good enough replacements for LabPlot and similar tools.

    • dima55 10 hours ago

      I'm also space and launch-vehicle adjacent. Using vnlog for data storage (like what you described, but with better tooling support) and feedgnuplot/gnuplotlib for visualization. Works great. The learning curve is really easy, you can get going and start analyzing stuff FAST. Making complex plots is fiddly, but it usually is with any tool.

    • tonyarkles 12 hours ago

      Thank you for this fantastic elaboration. I am in a very similar boat (unmanned aerospace) and have very similar needs. I’ve been chewing on making my own application to do this but LabPlot looks like it has potential to be exactly what I’ve been dreaming about for a few years.

carelyair 11 hours ago

It would be really helpful to add support for access to S3 buckets and other cloud object stores. Iceberg support would also be super helpful, as it is gaining lots of traction.

RedShift1 19 hours ago

Unfortunately, the only database it supports is SQLite; I really wanted to hook this up directly to a database or REST API. Going back and forth between exporting files and importing them into LabPlot is just too much work...
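
In the meantime, one way to cut down the back and forth is to mirror the query or API result into a local SQLite file, since that is the database LabPlot does support (a sketch only; the URL, table name and JSON shape are placeholders):

    import sqlite3
    import pandas as pd
    import requests

    # assume the endpoint returns a JSON array of records (placeholder URL)
    rows = requests.get("https://example.com/api/measurements").json()
    df = pd.DataFrame(rows)

    # mirror it into a SQLite file that LabPlot can open as a database connection
    with sqlite3.connect("measurements.sqlite") as con:
        df.to_sql("measurements", con, if_exists="replace", index=False)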

wiradikusuma 10 hours ago

Is this like a desktop version of Metabase/Superset?

ntxvega1975 17 hours ago

I can't tell what license is applicable.

  • echoangle 16 hours ago

    On https://labplot.org/frequently-asked-questions/, under "Under what license is LabPlot released?", it says this:

    > LabPlot is licensed under GNU General Public License, version 2.0 or later, so to put it in a few sentences:

    > You are free to use LabPlot, for any purpose

    > You are free to distribute LabPlot

    > You can study how LabPlot works and change it

    > You can distribute changed versions of LabPlot

    > In the last case you have the obligation to also publish the changed source code as GPL.